
Table of Contents

    20 August 2022 Volume 49 Issue 4
      
    Information and Communications Engineering
    Deep learning reconstruction algorithm for incomplete samples of frequency hopping communication signals
    QI Peihan,LI Bing,XIE Aiping,GAO Xianglan
    Journal of Xidian University. 2022, 49(4):  1-7.  doi:10.19665/j.issn1001-2400.2022.04.001

    Compressed spectrum sensing can acquire wideband frequency hopping (FH) communication signals at a rate far below the Nyquist rate, but under-sampled signal reconstruction, as a key component of compressed spectrum sensing, directly determines the receiving performance of FH communication. Aiming at the problems of low under-sampled reconstruction accuracy, high computational complexity, and long iterative reconstruction time in compressed sensing of FH communication signals, this paper proposes a deep learning reconstruction method for under-sampled FH communication signals. In the proposed method, deep learning is introduced into the reconstruction of wideband sparse under-sampled signals, a suitable input-layer network structure is designed for the under-sampled samples, and a generative reconstruction network is constructed to replace sparse optimization, so that under-sampled signal reconstruction without iteration is realized. The influence of parameters such as the under-sampling structure configuration, network model settings, and signal-to-noise ratio on reconstruction performance is simulated. Simulation results show that, compared with the classical sparsity adaptive matching pursuit (SAMP) and orthogonal matching pursuit (OMP) reconstruction algorithms and an under-sampled signal reconstruction method based on the convolutional neural network, the proposed method achieves better reconstruction error and reconstruction time. The proposed method can reconstruct under-sampled FH communication signals accurately, efficiently, and in real time, and offers an effective way to resolve the bottleneck in receiving and processing wideband FH communication signals.
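For context, the classical OMP baseline that the paper compares against can be sketched in a few lines of NumPy. This is a generic textbook OMP, not the paper's learned reconstruction network; the matrix sizes and sparsity level are illustrative.

```python
import numpy as np

def omp(A, y, sparsity, tol=1e-10):
    """Orthogonal matching pursuit: greedily recover a sparse vector x
    from under-sampled measurements y = A @ x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(n)
    for _ in range(sparsity):
        # pick the dictionary column most correlated with the residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit restricted to the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coef
    return x_hat

# toy demo: a 3-sparse "spectrum" recovered from 40 random projections
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 128)) / np.sqrt(40)
x_true = np.zeros(128)
x_true[[5, 37, 90]] = [1.0, -2.0, 1.5]
x_rec = omp(A, A @ x_true, sparsity=3)
```

The iterative support search and per-step least-squares solve are exactly the cost that the paper's non-iterative generative network is designed to avoid.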

    Improved CG iterative algorithm in massive MIMO systems
    LIU Gang,LOU Zengjin,LIN Qinhua,GUO Yi
    Journal of Xidian University. 2022, 49(4):  8-15.  doi:10.19665/j.issn1001-2400.2022.04.002

    Massive multiple-input multiple-output (MIMO) technology has become one of the key technologies of current and future mobile systems owing to its high spectral and energy efficiency. However, as the number of user antennas increases, the complexity of the detection algorithm rises sharply, making it difficult to implement quickly and effectively in practical systems. An improved conjugate gradient (CG) iterative algorithm is proposed to address the problems of high computational complexity and slow convergence. Under the channel-hardening condition, the initial matrix of the Richardson iteration is used as the initial matrix for one Newton iteration, and the Newton result in turn serves as the initial matrix for further CG iterations. Through these steps, the algorithm keeps the computational complexity low while accelerating convergence. The computational complexity of the algorithm is analyzed quantitatively, and its bit error rate performance and convergence speed are compared with those of other typical detection algorithms through simulation experiments. Simulation results show that the proposed algorithm has a lower computational complexity and a faster convergence speed than other schemes. With 64QAM modulation and antenna configurations of 32×256 or 64×1024, the detection performance approaches that of the minimum mean square error algorithm after only 3 iterations.
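The cascaded initialisation described above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (a diagonally dominant Gram matrix standing in for a hardened massive-MIMO channel); the function name and toy dimensions are ours, not the paper's.

```python
import numpy as np

def hybrid_cg_detect(A, b, cg_iters=3):
    """Richardson-style initial matrix (inverse diagonal of A), refined
    by one Newton matrix-inversion step X <- X(2I - AX), whose result
    seeds a standard conjugate-gradient solve of A x = b."""
    n = A.shape[0]
    X = np.diag(1.0 / np.diag(A))        # Richardson initial matrix
    X = X @ (2 * np.eye(n) - A @ X)      # one Newton iteration
    x = X @ b                            # CG starting point
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(cg_iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# toy MMSE-style system: Gram matrix of a tall random channel (64x8)
rng = np.random.default_rng(1)
H = rng.standard_normal((64, 8))
A = H.T @ H + np.eye(8)   # diagonally dominant when rows >> columns
b = rng.standard_normal(8)
x = hybrid_cg_detect(A, b, cg_iters=3)
```

With a large base-station-to-user antenna ratio the Gram matrix is strongly diagonally dominant, which is what makes the cheap diagonal start plus a single Newton step a good CG seed.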

    Efficient communication method for transient electromagnetic field computation on unstructured grids
    LI Minxuan,JIANG Shugang,WU Qingkai,LIN Zhongchao
    Journal of Xidian University. 2022, 49(4):  16-23.  doi:10.19665/j.issn1001-2400.2022.04.003

    A minimum communication cycle strategy for large-scale parallel computation of transient electromagnetic fields is presented to address the communication complexity of the discontinuous Galerkin time-domain method on unstructured grids. The topology of point-to-point communication between processors is mapped to a communication matrix. Exploiting the fact that communication between non-associated processes does not interfere when the communication buffer is not full, the non-interfering processor communication sequence is sorted, simultaneous communications in each round trip are recorded as the same communication cycle, and the communication matrix is refilled. The strategy recursively takes the remainder of each element of the initial communication matrix; after each recursion, the processors that communicate simultaneously within a cycle are obtained and their corresponding elements are excluded from the next recursion, until all elements of the initial communication matrix are sorted. The minimum communication cycle strategy effectively reduces the total number of communication cycles in the parallel iterative computation, reduces the time consumed by processor communication, and thus improves the computational efficiency of the algorithm. Compared with traditional strategies, the number of communication cycles is reduced to 3%, which significantly improves parallel efficiency and reduces computing time. A parallel efficiency of 70.38% (tenfold expansion) is achieved with 8,000 core groups (8,000 processors, 520,000 cores) on the Sunway TaihuLight supercomputer.
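The core idea of packing non-interfering exchanges into shared cycles can be illustrated with a simple greedy scheduler. This is a simplified stand-in for the paper's recursive remainder-based algorithm, intended only to show why grouping reduces the round count.

```python
def schedule_rounds(pairs):
    """Pack point-to-point exchanges into rounds so that no processor
    appears twice in a round; each round can then run without
    interference (a greedy sketch, not the paper's exact strategy)."""
    remaining = list(pairs)
    rounds = []
    while remaining:
        busy, this_round, leftover = set(), [], []
        for a, b in remaining:
            if a in busy or b in busy:
                leftover.append((a, b))   # conflicts wait for a later round
            else:
                this_round.append((a, b))
                busy.update((a, b))
        rounds.append(this_round)
        remaining = leftover
    return rounds

# all-to-all among 4 processors: 6 exchanges packed into 3 rounds
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
rounds = schedule_rounds(pairs)
```

Run sequentially these 6 exchanges would take 6 communication steps; grouped into conflict-free rounds they take 3, which is the kind of saving the minimum-cycle strategy scales up to thousands of processors.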

    Research on the propagation model and communication performance of the 5G millimeter wave at the rain attenuation link
    YANG Ruike,GAO Xia,WU Fuping,LI Renxian,ZHOU Ye
    Journal of Xidian University. 2022, 49(4):  24-30.  doi:10.19665/j.issn1001-2400.2022.04.004

    To meet all-weather application requirements of 5G communication systems, probability distribution models of the attenuation rate (specific attenuation) on millimeter wave (MMW) propagation links and of the rain attenuation channel state are proposed for rainfall environments. Rainfall data observed in Beijing and Haikou from 1951 to 2019 are statistically processed and analyzed with the Kolmogorov-Smirnov (K-S) goodness-of-fit test, indicating that the rainfall rate in typical areas of China is better described by the Weibull distribution. Based on the Weibull distribution of the rainfall rate and the calculation method for the rain attenuation rate, the probability density function model of rain attenuation on the MMW communication link is derived. Considering the propagation properties of millimeter waves in the rainfall channel, the probability distribution model of the rain attenuation channel state is obtained from the rain attenuation probability density function; it can effectively predict the channel attenuation state of a 5G link in rainfall environments. Furthermore, the average BER, channel capacity, and outage probability of MMW communication over the rainfall link are analyzed under M-QAM modulation, which is of great practical value for the deployment of MMW communication networks in rainfall environments and for the link design of telecommunication operators.
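The Weibull-versus-empirical comparison at the heart of the K-S goodness-of-fit step can be reproduced in a few lines. The rain-rate parameters below are illustrative, not the fitted values from the Beijing/Haikou data.

```python
import numpy as np

def weibull_cdf(r, k, lam):
    """CDF of the Weibull distribution used to model the rain rate."""
    return 1.0 - np.exp(-(r / lam) ** k)

def ks_statistic(samples, cdf):
    """Two-sided Kolmogorov-Smirnov distance between the empirical CDF
    of the samples and a candidate model CDF."""
    x = np.sort(samples)
    n = len(x)
    ecdf_hi = np.arange(1, n + 1) / n   # ECDF just after each sample
    ecdf_lo = np.arange(0, n) / n       # ECDF just before each sample
    f = cdf(x)
    return max(np.max(np.abs(ecdf_hi - f)), np.max(np.abs(ecdf_lo - f)))

# synthetic rain-rate samples from Weibull(k=0.9, lam=4 mm/h),
# generated by inverse-transform sampling
rng = np.random.default_rng(2)
u = rng.uniform(size=5000)
samples = 4.0 * (-np.log(1.0 - u)) ** (1.0 / 0.9)
D_good = ks_statistic(samples, lambda r: weibull_cdf(r, 0.9, 4.0))
D_bad = ks_statistic(samples, lambda r: weibull_cdf(r, 2.5, 4.0))
```

A small K-S distance for the correct shape parameter and a large one for a wrong shape is exactly the evidence pattern the abstract uses to select the Weibull model.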

    Design and implementation of the VLC digital baseband system based on FPGA
    WANG Hetong,NIU Shuqiang,SHI Huili,WANG Ping,GUO Lixin,LIU Zhongyu
    Journal of Xidian University. 2022, 49(4):  31-38.  doi:10.19665/j.issn1001-2400.2022.04.005

    Two physical layer technologies are mainly used in visible light communication (VLC): asymmetrically clipped optical orthogonal frequency division multiplexing (OFDM) and direct-current-biased optical OFDM, but each favors only spectral efficiency or only power efficiency. To balance the two, a VLC digital baseband system with an adaptively biased optical OFDM scheme is designed and implemented on a field programmable gate array (FPGA) hardware platform. First, the adaptively biased optical OFDM scheme is implemented offline in MATLAB to verify the feasibility of an indoor VLC system using the optical OFDM signal and to determine the system parameters. Then, the design scheme of the VLC digital baseband system is introduced. Finally, based on the digital baseband system, an indoor VLC hardware platform is built with the FPGA, and the main modules of the transmitter and receiver are presented. Test results on the platform illustrate that the proposed VLC digital baseband system can effectively improve the utilization of spectrum resources and reduce system energy consumption. Meanwhile, the special frame structure designed in the physical layer not only enhances channel estimation and equalization, but also improves the accuracy of Fast Fourier Transform window detection, which plays an important role in reducing the bit error rate, guaranteeing the communication transmission rate, and improving the overall performance of the digital baseband system.

    DDPG method for joint beamforming and power control in mmwave communication
    LI Zhongjie,GAO Wei,XIONG Jiyuan,LI Jianghong
    Journal of Xidian University. 2022, 49(4):  39-48.  doi:10.19665/j.issn1001-2400.2022.04.006

    Most existing beamforming algorithms rely heavily on the quality of instantaneous channel state information (CSI), which is unsuitable for practical systems, and ignore power control, resulting in serious inter-user interference and lower spectral efficiency. A deep reinforcement learning-based joint beamforming and power control technique is proposed to tackle the beamforming design and power control problems jointly without requiring perfect CSI. First, an information exchange protocol is proposed to help the base station understand environmental information, and a dual-model system with a centralized-training, distributed-execution structure is designed to solve the joint optimization problem. The cloud uses deep Q-learning (DQN) to design the beamforming after receiving the local samples collected and uploaded by the base stations. Since deep Q-learning is not applicable to continuous variables, the deep deterministic policy gradient (DDPG) algorithm is employed to solve the power control problem. Once training is completed, the cloud model is broadcast to all base stations for distributed execution in order to acquire new local samples. Simulation results show that the proposed scheme significantly outperforms traditional beamforming algorithms in spectral efficiency.

    Blockchain-assisted vehicle reputation management method for VANET
    ZHANG Haibo,BIAN Xia,XU Yongjun,XIANG Shengting,HE Xiaofan
    Journal of Xidian University. 2022, 49(4):  49-59.  doi:10.19665/j.issn1001-2400.2022.04.007

    Vehicular Ad-hoc NETworks (VANETs) are a special kind of Mobile Ad-hoc NETwork (MANET) with highly mobile vehicles as nodes. Vehicles interact with each other through the network to share massive amounts of data, improving the transportation efficiency and safety of the Intelligent Transportation System (ITS). However, malicious vehicles bring serious security risks to the VANET and even to the entire transportation system. To address this problem, an improved Three-Valued Subjective Logic (3VSL) algorithm is proposed to evaluate vehicle reputation values and identify malicious vehicles using reputation thresholds, and a trusted route search algorithm is used to improve calculation accuracy. The method stores the trust database in a distributed manner using blockchain technology, which also guarantees the immutability of the data. The vehicle reputation value is updated periodically by combining the historical periodic reputation values of vehicle nodes, historical interaction information, and interaction frequency. In addition, the depth-first search (DFS) algorithm is used to determine trust paths between vehicles more precisely, the six-degrees-of-separation theory is applied to the problem of low information volume caused by overly long trust paths, and reputation thresholds are set to filter out trust paths with a low information volume, which further improves computational accuracy. Simulation results show that, compared with traditional algorithms, the proposed algorithm significantly improves the identification efficiency of malicious vehicles and shows good anti-attack performance against group collusion attacks and on-off attacks.
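The depth-limited DFS trust-path search described above can be sketched as follows. This is a generic illustration with made-up trust values; the six-hop bound mirrors the six-degrees limit the abstract mentions, and multiplying edge trusts along a path is one common aggregation choice, not necessarily the paper's exact 3VSL update.

```python
def trust_paths(graph, src, dst, max_hops=6):
    """Depth-first enumeration of cycle-free trust paths up to max_hops,
    each scored by the product of edge trust values.
    graph maps node -> {neighbor: trust in [0, 1]}."""
    paths = []

    def dfs(node, path, trust):
        if node == dst:
            paths.append((path[:], trust))
            return
        if len(path) - 1 >= max_hops:   # hop budget exhausted
            return
        for nbr, t in graph.get(node, {}).items():
            if nbr not in path:         # no cycles on a trust path
                path.append(nbr)
                dfs(nbr, path, trust * t)
                path.pop()

    dfs(src, [src], 1.0)
    return paths

# hypothetical trust graph: A can reach D via B or via C
graph = {
    "A": {"B": 0.9, "C": 0.6},
    "B": {"D": 0.8},
    "C": {"D": 0.7},
}
paths = trust_paths(graph, "A", "D")
best_trust = max(t for _, t in paths)
```

Filtering out long low-trust paths before aggregation is then a matter of thresholding these path scores, as the abstract describes.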

    Intent-driven autonomous driving networking technology
    LENG Changfa,YANG Chungang,PENG Yao
    Journal of Xidian University. 2022, 49(4):  60-70.  doi:10.19665/j.issn1001-2400.2022.04.008

    End users, vertical industries, content providers, and others place higher requirements on the flexibility and intelligence of future networks. The diversification of network services and the complexity of network architectures urgently require autonomy in network planning, management, operation, and optimization, ultimately achieving a high degree of autonomy in network intent and service intent. To reduce the complexity of network management and enhance network self-optimization, new technologies such as automation, intent-driven networking, artificial intelligence, and automatic policy generation need to be explored and used to realize autonomous driving networks. The autonomous driving network takes the intent-driven network as the vision of current network evolution and relies on intent translation and policy generation and verification technologies to realize efficient intent-based network operation and management. In addition, it combines artificial intelligence to reduce the cost of manual operation and maintenance and to improve the efficiency of network management and optimization. This paper summarizes the research background of autonomous driving networks, clarifies their definition and advantages, and then proposes a new autonomous driving network control architecture along with its implementation process and key technologies. Finally, a typical application example is designed to demonstrate the feasibility of the autonomous driving network vision and to clarify its development direction. The autonomous driving network provides users with more flexible and efficient network management and control capabilities, effectively meets user demands, and improves the user experience.

    Access point selection matching localization algorithm based on fuzzy clustering
    QIN Ningning,ZHANG Chenchen
    Journal of Xidian University. 2022, 49(4):  71-81.  doi:10.19665/j.issn1001-2400.2022.04.009

    Traditional clustering methods have difficulty dividing the physical space effectively, and instability of the signal sources leads to large positioning errors. To reduce the storage cost of the fingerprint database and improve fingerprint quality, this paper proposes a simplified access point matching localization algorithm based on fuzzy clustering. In the offline stage, the large target space is divided into multiple overlapping fuzzy partitions according to the characteristics of the signal sources. The stability, visibility, redundancy, and other multi-scale characteristics of the signal sources in each partition are considered comprehensively to establish the smallest access point identification set for the area, which improves positioning speed and reduces mismatches caused by unstable access points. In the position calculation stage, the traditional Euclidean distance is improved by weighting the neighbor points according to the stability characteristics of the regional access points, and the speed constraint between adjacent moments during the movement of the user is used to filter positioning outliers, reducing the adverse influence of changes in the environment and signal sources on the localization error. Tested in real scenarios, the proposed algorithm reduces the computational cost of positioning while effectively screening the access points, significantly reduces offline data storage, and keeps the average positioning error within 1 m. Compared with existing classic positioning methods, its positioning accuracy is improved by more than 15%.
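The stability-weighted distance idea in the position calculation stage can be sketched with a weighted-KNN estimator. All fingerprints, positions, and stability weights below are illustrative; the paper's actual weighting scheme may differ.

```python
import numpy as np

def locate(query_rss, fingerprints, positions, stability, k=3):
    """Weighted-KNN position estimate: the Euclidean distance in signal
    space is weighted per access point by a stability score, so stable
    APs dominate the match; the k nearest fingerprints are then blended
    by inverse distance."""
    diff = fingerprints - query_rss                 # (n_points, n_aps)
    d = np.sqrt(((diff ** 2) * stability).sum(axis=1))
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-9)                        # inverse-distance weights
    w /= w.sum()
    return w @ positions[nn]

# tiny database: 3 reference points, 2 access points (RSS in dBm)
fingerprints = np.array([[-40., -70.], [-60., -50.], [-70., -40.]])
positions = np.array([[0., 0.], [5., 0.], [10., 0.]])
stability = np.array([1.0, 0.5])    # AP1 stable, AP2 noisier
est = locate(np.array([-41., -69.]), fingerprints, positions,
             stability, k=2)
```

Because the query almost matches the first fingerprint, the inverse-distance blend lands near its position; down-weighting an unstable AP keeps a noisy reading on that AP from dragging the estimate away.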

    Compressed sensing of subsampled dynamic signals during high-speed machining
    HE Wangpeng,CHEN Binqiang,LI Yang,CHEN Jing,GUO Baolong
    Journal of Xidian University. 2022, 49(4):  82-89.  doi:10.19665/j.issn1001-2400.2022.04.010

    Aiming at the spectrum aliasing of the cutting force caused by unreasonable sampling parameter settings and the limited filtering steepness of anti-aliasing filters in high-speed machining condition monitoring systems, a spectrum sensing method based on approximate sparsity in the frequency domain is proposed. The nonlinearity of the machining system and of the sampling process makes the output waveform of the monitoring system contain high-order harmonics, which exhibit obvious approximate sparsity under the Fourier matrix (the amplitudes of most Fourier coefficients are approximately zero, with the energy concentrated at a few frequency points). Several spectrum subsets are obtained by approximating the spectrum with only the few spectral lines of large amplitude. Using the principle of subsampling mixing, the true frequency range of each spectrum subset is calculated, and the true frequency spectrum of the cutting force is corrected. According to the time-domain waveform characteristics of the frequency-band subsets, a general Linear Amplitude Modulation Sinusoidal Wave (LAMSW) model is constructed. The effectiveness of the proposed method is verified by LAMSW simulation analysis and high-speed milling experiments on aluminum alloy. The results show that the proposed method accurately recovers the true waveform of the milling force signal, and that the relative envelope error between the recovered time-domain waveform and the test signal is less than 4%. The research provides engineering and technical support for applying sparse theory to the analysis of subsampled signals.
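The "subsampling mixing" step, mapping a folded spectral line back to its possible true frequencies, follows directly from the aliasing relation. A minimal sketch (the numbers are illustrative, not machining data):

```python
import numpy as np

def alias_frequency(f_true, fs):
    """Apparent frequency of a tone at f_true when sampled at fs below
    the Nyquist requirement (the spectrum folds about multiples of fs)."""
    f = f_true % fs
    return min(f, fs - f)

def unfold_candidates(f_alias, fs, f_max):
    """All true frequencies up to f_max consistent with an observed
    alias f_alias: k*fs + f_alias and (k+1)*fs - f_alias for each fold."""
    cands = []
    k = 0
    while k * fs <= f_max:
        for f in (k * fs + f_alias, (k + 1) * fs - f_alias):
            if 0 <= f <= f_max:
                cands.append(f)
        k += 1
    return sorted(set(cands))

# a 2300 Hz harmonic sampled at only 1000 Hz folds down to 300 Hz
fa = alias_frequency(f_true=2300.0, fs=1000.0)
cands = unfold_candidates(fa, fs=1000.0, f_max=3000.0)
```

Prior knowledge of the band each spectrum subset can occupy (here, the known harmonic structure of the cutting force) is what lets the method pick the correct candidate from this list and so "unfold" the aliased spectrum.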

    Computer Science and Technology
    Discrimination and structure preserved cross-domain subspace learning for unsupervised domain adaption
    TAO Yang,YANG Na,GUO Tan
    Journal of Xidian University. 2022, 49(4):  90-99.  doi:10.19665/j.issn1001-2400.2022.04.011

    Domain adaptation based on feature representation is a popular transfer learning method. However, such methods fail to consider within-class and between-class relations after obtaining the new data representation, and the geometric structure information of the data is lost during the transformation. To overcome this problem, a novel discrimination- and structure-preserving cross-domain subspace learning method for unsupervised domain adaptation is developed in this paper. Under the framework of low-rank subspace learning, the method improves the classification accuracy of the classifier in the target domain by considering both the discrimination information of the source domain and the structural information of the data. Specifically, the method finds an invariant subspace between the source and target domains, and uses cross-domain sample reconstruction learning with low-rank constraints to reduce the cross-domain distribution discrepancy. During the transfer process, a label relaxation matrix is used to maximize the inter-class distance between samples of different categories in the source domain, and a between-class sparsity constraint on the source domain reduces the distance between samples of the same category, effectively retaining the discrimination information of the source domain. At the same time, an adaptive probabilistic graph structure retains the local nearest-neighbor relations of samples, capturing the underlying geometric structure of the data and enhancing the discrimination and robustness of subspace learning. Experiments on three different cross-domain image datasets verify the effectiveness of the proposed method. Experimental results show that its classification performance is better than that of existing methods.

    Impaired behavior recognition by using the multi-head-siamese neural network
    MA Lun,LIU Xin,ZHAO Bin,WANG Ruiping,LIAO Guisheng,ZHANG Yajing
    Journal of Xidian University. 2022, 49(4):  100-108.  doi:10.19665/j.issn1001-2400.2022.04.012

    Impaired behavior recognition, an important branch of human activity recognition, refers to recognizing harmful behaviors of people with special needs. Aiming at the problem that the correlation between sensors mounted on different parts of the human body is not taken into account when recognizing impaired behavior with multi-sensor devices, this paper proposes, based on deep learning theory, a multi-head Siamese neural network to characterize the relations between sensors, building multiple sub-networks for consistent feature extraction. The extracted features are fused and recognized by the classifier on the basis of the weight-sharing idea. In the presented network, an upsampling operation is first employed to fill in missing collected data, and the data are then standardized to improve recognition accuracy. The network hyperparameters are tuned by Bayesian optimization. In addition, to counter the over-fitting observed when recognizing impaired behavior with the Adam optimizer, L2 regularization is performed via the AdamW optimizer, further improving recognition accuracy. Results on raw data show that the network achieves a classification accuracy of 96.0%. Compared with the baseline network and a single-input network, the accuracy of the proposed network increases by 6.1% and 8.8%, respectively, and it reduces the possibility of incorrect prediction. Compared with a multiple-input network, its accuracy increases by 2.4% while the number of training parameters is reduced by 92%. This demonstrates that the network effectively exploits the relationship between sensors for impaired behavior recognition.
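The two preprocessing steps the abstract names (filling missing samples, then standardization) can be sketched in minimal form. Linear interpolation is one simple way to realise the gap fill; the paper's upsampling operation may be more elaborate.

```python
import numpy as np

def fill_and_standardize(x):
    """Fill missing (NaN) samples in a sensor channel by linear
    interpolation over the time index, then apply z-score
    standardisation so the channel has zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    mask = np.isnan(x)
    idx = np.arange(len(x))
    x[mask] = np.interp(idx[mask], idx[~mask], x[~mask])
    return (x - x.mean()) / x.std()

# a short accelerometer-style channel with two dropped samples
raw = [1.0, np.nan, 3.0, 4.0, np.nan, 6.0]
clean = fill_and_standardize(raw)
```

Each body-worn sensor channel would be cleaned this way before being fed to its own sub-network of the multi-head Siamese model.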

    Micro-video multi-label classification method based on multi-modal feature encoding
    JING Peiguang,LI Yaxin,SU Yuting
    Journal of Xidian University. 2022, 49(4):  109-117.  doi:10.19665/j.issn1001-2400.2022.04.013

    With the popularization of smart phones and the mobile Internet, micro-videos have developed rapidly as a new form of user-generated content (UGC), and browsing micro-videos has become one of the most popular forms of entertainment. Micro-videos have natural relevance across modalities and semantics, and making full use of this correlation is the key to micro-video representation learning. To better solve multi-label classification tasks, a modal subspace encoding algorithm is proposed, which integrates subspace coding for multiple modalities and label semantic relevance learning in a unified framework. The proposed algorithm uses a subspace coding network to obtain a complete common representation by modeling the consistency and complementarity of the modalities while further reducing redundancy and noise, so that common and complete representations of the multimodal fusion are obtained. Furthermore, a graph convolutional network is used with a label correlation matrix to learn the semantic relevance and representations of labels, which guide the multi-label classification task. Overall, the proposed algorithm makes full use of feature-level and label-level information to improve classification performance. The reconstruction loss and the multi-label classification loss are formulated as a whole, and experiments on a public dataset demonstrate the superiority of the proposed algorithm.
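A label correlation matrix of the kind fed to a graph convolutional network is commonly built from label co-occurrence statistics. The sketch below shows that standard construction; the threshold value and binarisation choice are illustrative assumptions, not necessarily the paper's.

```python
import numpy as np

def label_correlation(Y, tau=0.4):
    """Conditional-probability label graph: P[i, j] = P(label j | label i),
    estimated from a 0/1 label matrix Y of shape (n_samples, n_labels)
    and binarised with threshold tau to drop noisy rare co-occurrences."""
    Y = np.asarray(Y, dtype=float)
    co = Y.T @ Y                              # pairwise co-occurrence counts
    counts = np.diag(co).copy()               # per-label occurrence counts
    P = co / np.maximum(counts[:, None], 1)   # row-normalise by label count
    np.fill_diagonal(P, 0.0)
    return (P >= tau).astype(float)

# toy micro-video label matrix: 4 clips, 3 labels
Y = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 1],
              [0, 0, 1]])
A = label_correlation(Y)
```

Note the resulting adjacency is directed: label 1 always implies label 0 here (P = 1), while label 0 implies label 1 only two times out of three, which is why conditional rather than symmetric correlation is used.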

    Multi-scale salient object detection network combining an attention mechanism
    LIU Di,GUO Jichang,WANG Yudong,ZHANG Yi
    Journal of Xidian University. 2022, 49(4):  118-126.  doi:10.19665/j.issn1001-2400.2022.04.014

    At present, most salient object detection algorithms are disturbed by complex image backgrounds, and their detection results exhibit uneven brightness and blurred edges. To address these issues, a salient object detection network combining an attention mechanism with multi-scale feature fusion is proposed. First, the network is based on an encoder-decoder architecture, and features from adjacent layers are connected during encoding and decoding to capture multi-scale salient objects in the image. Second, an attention mechanism is integrated into the network to focus on the spatial and channel information of the features, so as to obtain uniform and complete salient object detection results with clear edges. Finally, a parallel multi-branch structure, named the Context Feature Extraction Module, extracts features under different receptive fields to improve the performance of salient object detection. Experimental results show that the proposed method can not only accurately locate and highlight the salient objects, but also accurately predict their edges against complex backgrounds. Compared with competing methods on the salient object detection dataset ECSSD, the mean absolute error (MAE) is reduced by at least 10% and the F-measure is improved by at least 0.7%.

    Micro-expression recognition based on two-channel decision information fusion
    RONG Ruyi,XUE Peiyun,BAI Jing,JIA Hairong,XIE Yali
    Journal of Xidian University. 2022, 49(4):  127-133.  doi:10.19665/j.issn1001-2400.2022.04.015

    Micro-expressions, an important channel for revealing underlying emotion, are unconscious, nonverbal facial information not controlled by the brain, and can reflect people's real psychological experience and state. However, micro-expression movements are small and quick and cannot easily be captured, making it difficult to raise the recognition accuracy of any single modality. To solve these problems, this paper proposes an algorithm for extracting the facial color features of micro-expressions and fuses them with micro-expression texture features at the decision level, constructing a bimodal micro-expression emotion recognition model. First, the model extracts texture features from the preprocessed micro-expression data with the uniform LBP-TOP algorithm. Second, the Lab color difference between corresponding pixels of two frames of the micro-expression image sequence is calculated to obtain the facial color features, and embedded feature selection is carried out to eliminate redundant features. Then, the classifiers of the two modalities are trained separately, and the classification information obtained from them is fused for decision making, yielding the final micro-expression classification results. The model was tested on the CASME II and SMIC micro-expression datasets. Experimental results show that the average recognition accuracies of the texture and facial color single modalities are 64.73% and 51.64% on CASME II, and 63.58% and 50.48% on SMIC, while the results after decision fusion are 68.11% and 66.43%, respectively. The recognition accuracy is higher than before fusion, indicating that the proposed bimodal emotion recognition model can significantly improve micro-expression recognition.
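Two of the building blocks above, the per-pixel Lab color difference and the decision-level fusion, can be sketched directly. The CIE76 formula is the simplest Lab difference; the fusion weight below is an illustrative choice, not the paper's learned value.

```python
import numpy as np

def delta_e(lab_a, lab_b):
    """Per-pixel CIE76 colour difference between two frames already
    converted to Lab space: the Euclidean distance over (L, a, b)."""
    return np.sqrt(((lab_a - lab_b) ** 2).sum(axis=-1))

def decision_fusion(p_texture, p_color, w=0.6):
    """Weighted decision-level fusion of the two classifiers' posterior
    probabilities (w is an illustrative texture weight)."""
    return w * np.asarray(p_texture) + (1 - w) * np.asarray(p_color)

# toy 2x2 Lab frames differing by 3 units on every channel
frame1 = np.zeros((2, 2, 3))
frame2 = np.full((2, 2, 3), 3.0)
de = delta_e(frame1, frame2)

# fuse per-class posteriors from the texture and colour classifiers
fused = decision_fusion([0.7, 0.2, 0.1], [0.3, 0.5, 0.2])
label = int(np.argmax(fused))
```

Fusing at the decision level rather than the feature level lets each modality keep its own classifier, which is why the two single-modality accuracies above can both be below the fused result.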

    Many-objective evolutionary algorithm based on the multitasking mechanism
    LIU Tianyu,CAO Lei
    Journal of Xidian University. 2022, 49(4):  134-143.  doi:10.19665/j.issn1001-2400.2022.04.016

    The search ability of traditional evolutionary algorithms decreases rapidly in many-objective optimization problems because of the reduced selection pressure. Therefore, a many-objective evolutionary algorithm based on a multitasking mechanism is proposed. To increase the selection pressure during optimization, an adaptive objective reduction strategy is adopted to construct a low-dimensional task related to the original many-objective optimization task. In constructing the low-dimensional task, the appropriate dimension reduction technique is chosen adaptively according to the evaluation of the current objective subset. The constructed low-dimensional task and the original many-objective task are then optimized simultaneously under the multitasking mechanism. An inter-task interaction strategy is adopted to allocate tasks to individuals and update the population, improving search ability and avoiding the information loss caused by dimension reduction. Moreover, a differential mutation operator is applied to individuals from the repository population that remain unchanged for several generations, to avoid premature convergence. In the experiments, the proposed algorithm is tested on five groups of benchmark functions against several state-of-the-art many-objective evolutionary algorithms. Statistical results demonstrate the effectiveness of the proposed algorithm in solving many-objective optimization problems.
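One simple instance of objective reduction, not necessarily the paper's adaptive strategy, is to drop the objective most correlated with the rest, since highly correlated objectives carry redundant selection pressure. A minimal sketch on a toy population:

```python
import numpy as np

def reduce_objectives(F, keep):
    """Correlation-based objective reduction: given an objective matrix
    F of shape (n_individuals, n_objectives), repeatedly drop the
    objective whose summed correlation with the others is largest,
    until only `keep` objectives remain. Returns the kept indices."""
    objs = list(range(F.shape[1]))
    while len(objs) > keep:
        C = np.corrcoef(F[:, objs], rowvar=False)
        np.fill_diagonal(C, 0.0)
        drop = objs[int(np.argmax(C.sum(axis=0)))]
        objs.remove(drop)
    return objs

# toy population: objective 2 nearly duplicates objective 0
rng = np.random.default_rng(3)
f0 = rng.random(100)
f1 = rng.random(100)
F = np.column_stack([f0, f1, f0 + 0.01 * rng.random(100)])
kept = reduce_objectives(F, keep=2)
```

The reduced objective set defines the low-dimensional helper task, which is then optimized alongside the original many-objective task under the multitasking mechanism.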

    Method to recognize human action by using the convolutional block attention mechanism
    GAO Deyong,KANG Zibing,WANG Song,WANG Yangping
    Journal of Xidian University. 2022, 49(4):  144-155.  doi:10.19665/j.issn1001-2400.2022.04.017
    Abstract ( 301 )   HTML ( 86 )   PDF (5373KB) ( 119 )   Save
    Figures and Tables | References | Related Articles | Metrics

    When focusing on the region of interest in an image sequence in the action recognition task, the attention mechanism attends mainly to the correlation of features at the channel level and ignores the spatial location information in the features, so it lacks the ability to accurately identify dynamic regions in the video. Therefore, this paper proposes an action recognition algorithm based on the attention mechanism and convolutional LSTM. First, the ResNet-50 network is used to obtain the feature representation of each video frame, and the convolutional block attention module first allocates the resources of the feature map across the convolution channels through channel attention, and then weights the feature maps with spatial attention. In this way the weights of the convolutional feature maps are optimally adjusted, and the influence of regions unrelated to the action is suppressed or reduced. At the same time, considering that the long short-term memory network (LSTM) loses the spatial structure information of the image frames when processing spatiotemporal data, the convolutional long short-term memory network (ConvLSTM) uses the convolution operation to mine the spatial correlation within the images, further supplementing a complete representation of the video's attributes. The ConvLSTM is used to model the sequence information of the features to obtain frame-level predictions. Finally, the predictions of all frames are combined to determine the video classification. Experimental results on three public datasets show that the proposed method can effectively highlight the key regions in the video and improve the accuracy of action recognition to a certain extent.
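    The channel-then-spatial weighting can be sketched in plain numpy. This is a deliberately simplified stand-in for the convolutional block attention module: the shared MLP weights `W1`, `W2` are illustrative, and the 7x7 convolution that normally produces the spatial map is replaced by a simple mean, so only the structure of the two stages is shown.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def channel_attention(feat, W1, W2):
        """Channel attention: a shared two-layer MLP applied to the average- and
        max-pooled channel descriptors of feat (C, H, W); W1: (C//r, C), W2: (C, C//r)."""
        avg = feat.mean(axis=(1, 2))            # (C,) average-pooled descriptor
        mx = feat.max(axis=(1, 2))              # (C,) max-pooled descriptor
        w = sigmoid(W2 @ np.maximum(W1 @ avg, 0) + W2 @ np.maximum(W1 @ mx, 0))
        return feat * w[:, None, None]          # reweight each channel

    def spatial_attention(feat):
        """Spatial attention from channel-wise mean and max maps (the usual 7x7
        convolution over the stacked maps is replaced by a mean for brevity)."""
        m = np.stack([feat.mean(axis=0), feat.max(axis=0)])   # (2, H, W)
        w = sigmoid(m.mean(axis=0))             # (H, W) spatial weight map
        return feat * w[None, :, :]             # reweight each location
    ```

    Both stages preserve the feature map's shape and only rescale it, which is why the module can be dropped between a backbone such as ResNet-50 and a ConvLSTM without changing tensor shapes.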

    Analysis of resilience of the high-speed rail temporal network under disaster temporal and spatial attributes
    WANG Qiuling,KE Yuhao,GAO Yimin,CHENG Cheng,WANG Yuhang,LIN Jixiang
    Journal of Xidian University. 2022, 49(4):  156-166.  doi:10.19665/j.issn1001-2400.2022.04.018
    Abstract ( 338 )   HTML ( 17 )   PDF (1477KB) ( 78 )   Save
    Figures and Tables | References | Related Articles | Metrics

    Evaluating the resilience of the high-speed rail temporal network is of great significance for analyzing the structure of the high-speed rail system and dispatching the operation of high-speed rail trains. Aiming at the temporal and spatial attributes of disaster factors that are not well considered in existing resilience studies of complex transportation networks, this paper proposes a new model for analyzing the resilience of the high-speed rail temporal network, based on the differences between nodes in the time and space dimensions of a network attacked by different disasters. First, a high-speed rail temporal network model is built from the time sequence of trains passing through stations and the spatial relationships between stations, and a resilience index is introduced to evaluate the resilience of the high-speed rail temporal and spatial network under normal conditions. Then the weights of different disasters are determined by the entropy weight method, an evaluation model for the resilience of the high-speed rail temporal network under multiple disasters is constructed, and the resilience of the network under disaster scenarios is evaluated through the resilience indicators. The simulation experiment uses the operation data of China's high-speed rail and ten years of historical data on disasters such as rainstorms and earthquakes, and compares the resilience evaluation model in this paper with two commonly used complex-network resilience evaluation models. Experimental results show that, compared with the traditional resilience assessment models, the final resilience loss ratio of the proposed model is reduced by 0.61%, which is more consistent with the process by which the actual high-speed rail network is affected by disasters, indicating that the model has good theoretical value and practicality.
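    The entropy weight method used to weight the different disasters is a standard objective-weighting procedure, sketched below. The decision matrix `X` (rows: samples or time periods, columns: disaster indicators) is illustrative; a column whose values vary widely carries more information and receives a larger weight, while a constant column receives a weight of zero.

    ```python
    import numpy as np

    def entropy_weights(X):
        """Entropy weight method: columns of X are criteria (e.g. disaster types),
        rows are observations; returns one objective weight per criterion."""
        P = X / X.sum(axis=0)                      # column-wise proportions
        n = X.shape[0]
        with np.errstate(divide="ignore", invalid="ignore"):
            logs = np.where(P > 0, np.log(P), 0.0)
        e = -(P * logs).sum(axis=0) / np.log(n)    # information entropy per criterion
        d = 1.0 - e                                # degree of divergence
        return d / d.sum()                         # normalized weights
    ```

    For example, with one varying column and one constant column, all the weight goes to the varying column, since the constant one cannot discriminate between scenarios.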

    Electronic Science and Technology & Others
    Design of random pre-obfuscation logic units against EM side-channel attack
    ZHAO Yiqiang,CAO Yuwen,HE Jiaji,MA Haocheng,LIU Yanjiang,YE Mao
    Journal of Xidian University. 2022, 49(4):  167-175.  doi:10.19665/j.issn1001-2400.2022.04.019
    Abstract ( 295 )   HTML ( 20 )   PDF (1699KB) ( 98 )   Save
    Figures and Tables | References | Related Articles | Metrics

    Due to their programmable features, FPGAs have become prevalent in security applications that use cryptographic algorithms. Recently, electromagnetic side-channel analysis attacks have become a major threat to these hardware implementations. On the basis of the hardware architecture of field programmable gate arrays, we propose an electromagnetic side-channel countermeasure based on random pre-obfuscation logic units. These logic units are implemented using the look-up table architecture and inserted with elaborate timing adjustments, on the basis of which the initial state of the combinational logic and the state transitions of the sequential logic are hidden, so as to reduce the correlation between electromagnetic radiation and the key. After applying the countermeasure to an Advanced Encryption Standard circuit, experimental results show that the number of electromagnetic traces required to crack the key increases from 94 to more than 100 000, which means that the electromagnetic side-channel security is improved by at least 1 000 times. In terms of overhead, the resource usage and power increase only by 1.1% and 1.47% respectively, with no additional performance overhead introduced.
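    Why random obfuscation raises the number of traces needed can be illustrated with a toy correlation-based attack simulation. Everything below is a simplified model, not the paper's circuit: the S-box is a random permutation standing in for the AES S-box, leakage is modeled as the Hamming weight of the S-box output plus noise, and the countermeasure is modeled as an extra random component added to each trace.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def hamming_weight(x):
        """Hamming weight of each byte in a uint8 array."""
        return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

    # A random 8-bit bijection stands in for the AES S-box (illustrative only).
    SBOX = rng.permutation(256).astype(np.uint8)

    def simulate_traces(key, n, obf_amp=0.0):
        """Leakage = HW(SBOX[p ^ key]) + measurement noise; obf_amp models the
        random pre-obfuscation component hiding the data-dependent transitions."""
        p = rng.integers(0, 256, n, dtype=np.uint8)
        leak = hamming_weight(SBOX[p ^ key]).astype(float)
        trace = leak + rng.normal(0, 1.0, n) + obf_amp * rng.normal(0, 1.0, n)
        return p, trace

    def cpa_corr(p, trace, key_guess):
        """Absolute Pearson correlation between a key-guess model and the traces."""
        model = hamming_weight(SBOX[p ^ key_guess]).astype(float)
        return abs(np.corrcoef(model, trace)[0, 1])
    ```

    With the obfuscation amplitude turned up, the correlation for the correct key guess collapses, so an attacker needs far more traces to distinguish it from wrong guesses.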

    Design of a hardware compression encoder with a high throughput
    WU Changkun,ZHANG Wei,HAO Yazhe
    Journal of Xidian University. 2022, 49(4):  176-183.  doi:10.19665/j.issn1001-2400.2022.04.020
    Abstract ( 301 )   HTML ( 11 )   PDF (859KB) ( 96 )   Save
    Figures and Tables | References | Related Articles | Metrics

    Aiming at the pressure that massive image data puts on storage and transmission bandwidth in practical applications, a high-throughput image compression encoder compatible with 8- to 16-bit grayscale images is designed, and the corresponding VLSI architecture is given. By analyzing the characteristics of 16-bit grayscale pixel values, and the difference in processing time between the Discrete Wavelet Transform (DWT) and Embedded Block Coding with Optimized Truncation (EBCOT) in the hardware implementation of a JPEG2000 compression encoder, an optimal parallel-serial-parallel structure is proposed, and the architecture is designed with high 8-bit and low 8-bit processing paths that can operate independently or together. This architecture increases the flexibility of the encoder and greatly improves its throughput. Finally, the encoder is implemented on a Xilinx XC7K480T. The highest operating frequency of the encoder is 147.734 MHz, the maximum throughput for 8-bit grayscale images is 169.55 MB/s, and the maximum throughput for 16-bit grayscale images is 266.87 MB/s. Compared with existing similar encoders, the throughput is increased by more than 40%. In practical engineering applications, the encoder has not only high reliability and good flexibility but also strong scalability; by controlling the parallelism, it can realize high-ratio, high-quality and fast compression of images with different resolutions, which is of important practical value.
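    The high-8-bit/low-8-bit split that enables the two parallel processing paths amounts to separating each 16-bit pixel into two byte planes, as in the minimal sketch below (a software illustration of the data layout, not the hardware pipeline itself).

    ```python
    import numpy as np

    def split_planes(img16):
        """Split a 16-bit grayscale image into high and low 8-bit planes so that
        two 8-bit pipelines can compress the halves independently or together."""
        high = (img16 >> 8).astype(np.uint8)    # most significant byte per pixel
        low = (img16 & 0xFF).astype(np.uint8)   # least significant byte per pixel
        return high, low

    def merge_planes(high, low):
        """Recombine the two byte planes into the original 16-bit image."""
        return (high.astype(np.uint16) << 8) | low
    ```

    The split is lossless: merging the two planes recovers the original image exactly, which is why the two paths can be encoded separately without coordination.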

    PETA-Gmin:dynamic continuation algorithm for solving nonlinear circuits
    JIN Zhou,LIU Yi,PEI Haojie,FENG Tian,DUAN Yiru,ZHOU Zhenya
    Journal of Xidian University. 2022, 49(4):  184-192.  doi:10.19665/j.issn1001-2400.2022.04.021
    Abstract ( 391 )   HTML ( 13 )   PDF (915KB) ( 95 )   Save
    Figures and Tables | References | Related Articles | Metrics

    In transistor-level circuit simulation for integrated circuit design, various DC continuation algorithms have been developed to solve for the DC operating point successfully, such as the Gmin stepping algorithm, the source stepping algorithm, the pseudo-transient algorithm, and homotopy algorithms. With the continuous development of integrated circuits, the convergence and efficiency of DC analysis algorithms remain a huge challenge for today's large-scale, strongly nonlinear circuits. To solve the problems of poor convergence and low simulation efficiency in simulating such circuits, a hybrid multi-phase continuation method named PsEudo-TrAnsient Gmin stepping (PETA-Gmin) is proposed to improve convergence while maintaining high efficiency. Different from conventional methods, PETA-Gmin integrates the pseudo-transient process into the Gmin stepping method. The pseudo-capacitor allows the node voltage to change continuously, thereby effectively alleviating the discontinuity of the solution curve during simulation and improving convergence, while the fast stepping of Gmin ensures excellent simulation efficiency. The proposed PETA-Gmin approach is verified on industrial-level large-scale post-layout circuits. Experimental results demonstrate that the convergence is greatly improved compared with the Gmin stepping method, and that the approach achieves up to a 4.89X (2.57X on average) speedup compared with the state-of-the-art pseudo-transient analysis method.
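    To see what plain Gmin stepping does, consider the smallest possible example: the node equation of a series resistor-diode circuit. A shunt conductance Gmin is added at the node and swept from a large value down toward zero, each stage warm-starting Newton's method for the next. This scalar sketch (circuit values are illustrative) shows the baseline method that PETA-Gmin extends with a pseudo-transient phase; it is not the paper's algorithm.

    ```python
    import numpy as np

    def solve_diode_gmin(Vs=5.0, R=1e3, Is=1e-14, Vt=0.025):
        """Gmin stepping for the node equation of a series R + diode circuit:
            (Vs - v)/R = Is*(exp(v/Vt) - 1) + Gmin*v
        Gmin is swept from 1 S down to 1e-12 S; each stage's solution
        warm-starts damped Newton iteration for the next, smaller Gmin."""
        v = 0.0
        for gmin in 10.0 ** -np.arange(0, 13):
            for _ in range(100):                     # Newton iterations per stage
                f = (Vs - v) / R - Is * (np.exp(v / Vt) - 1) - gmin * v
                df = -1 / R - (Is / Vt) * np.exp(v / Vt) - gmin
                step = f / df
                v -= np.clip(step, -0.1, 0.1)        # damping prevents exp overflow
                if abs(step) < 1e-12:
                    break
        return v
    ```

    At large Gmin the node is nearly short-circuited and Newton converges trivially; as Gmin shrinks, the solution slides continuously toward the true operating point (a diode drop of roughly 0.6-0.7 V here), which is exactly the continuation idea the abstract describes.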

    Design and analysis of a new hoop tensegrity structure for space deployable antennas
    TANG Yaqiong,LI Tuanjie,CHEN Congcong
    Journal of Xidian University. 2022, 49(4):  193-200.  doi:10.19665/j.issn1001-2400.2022.04.022
    Abstract ( 336 )   HTML ( 11 )   PDF (1778KB) ( 84 )   Save
    Figures and Tables | References | Related Articles | Metrics

    The space deployable antenna is a key component of space-based VLBI, global rainfall radar systems, space solar power stations and large space military infrastructure. To fit within the launch limits of the rocket, the cable-net structure has become an optimal design scheme for large space deployable structures. As the antenna size increases, slender rods are assembled into a truss structure to provide boundary support for the cable net. Because of the low flexural rigidity of the slender rods, the truss structure is distorted by the cable forces, which ultimately reduces the design accuracy of the structure. To solve this problem, this paper proposes a new hoop tensegrity structure for space deployable antennas, in which the rods bear only compression and the cables bear only tension. For this structure, block force-balance equations are established for the distributed calculation of the supporting truss and the cable net. Then a tension design method and a tension-shape coupling design method are proposed for the form finding of the hoop tensegrity structure. Finally, a simulation example is presented to verify the proposed structure and methods. It is shown that the structure retains a superior surface accuracy even when elastic deformation is considered.
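    Form finding of a tension structure is commonly posed as a linear equilibrium problem via the force density method, sketched below; this is a standard textbook formulation given for orientation, not the paper's block force-balance equations, and the example geometry is invented.

    ```python
    import numpy as np

    def force_density_form_finding(C, q, x_fixed, n_free):
        """Force-density form finding: solve D_ff X_f = -D_fc X_c, where
        D = C^T diag(q) C is the force-density matrix of the cable net.

        C: branch-node incidence matrix (members x nodes), free nodes first;
        q: force density (tension / length) of each member;
        x_fixed: coordinates of the fixed boundary nodes (n_fixed x dim)."""
        D = C.T @ np.diag(q) @ C
        D_ff = D[:n_free, :n_free]          # free-free block
        D_fc = D[:n_free, n_free:]          # free-fixed coupling block
        return np.linalg.solve(D_ff, -D_fc @ x_fixed)
    ```

    With equal force densities, a single free node hung between four fixed corner nodes equilibrates at their centroid; unequal densities pull the node toward the more highly tensioned cables, which is the lever a tension design method adjusts.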

    Correspondence calculation of 3D shapes by mixed supervision learning
    YANG Jun,LI Jintai
    Journal of Xidian University. 2022, 49(4):  201-212.  doi:10.19665/j.issn1001-2400.2022.04.023
    Abstract ( 288 )   HTML ( 14 )   PDF (2863KB) ( 52 )   Save
    Figures and Tables | References | Related Articles | Metrics

    The focus of this paper is on the accuracy of existing shape correspondence methods, which is easily degraded by topological changes of near-isometric non-rigid 3D shapes. The novel approach we propose is based on a Mixed Supervision Deep Functional Maps Network (MSDFMNet). First, in the weakly supervised feature extraction module, the 3D point cloud representations of the source and target shapes are approximately rigidly aligned through weakly supervised learning, and the features are then learned directly from the raw shape geometry, which yields more discriminative features while solving the problem that the symmetry of the shape itself affects the accuracy of the correspondence. Second, by turning the extracted features into their corresponding spectral feature descriptors in the unsupervised functional maps module, we build the functional maps matrix, and then apply weighted regularization to obtain the optimal functional maps matrix. In addition to addressing the insufficient constraints on the functional maps matrix, this reduces the need for labeled data and the labor cost of the algorithm. Finally, the optimal functional maps matrix is refined in the post-processing module by the ZoomOut algorithm to recover precise point-to-point mappings. Experimental results show that the geodesic errors of the 3D shape correspondences constructed by this algorithm on the FAUST, SCAPE and SURREAL datasets are smaller than those of the current commonly used methods, that the correspondence results are more accurate, and that the texture mapping results are smoother. Our algorithm has good generalization ability.
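    The functional-map step at the core of such pipelines reduces to a small regularized least-squares problem: given descriptor coefficients in each shape's spectral basis, find the matrix C that maps one set onto the other, then recover a point map by nearest neighbors in the aligned spectral embeddings. The sketch below uses plain Tikhonov regularization in place of the paper's weighted regularization and omits ZoomOut refinement; basis and descriptor inputs are assumed precomputed.

    ```python
    import numpy as np

    def functional_map(Phi_M, Phi_N, desc_M, desc_N, alpha=1e-3):
        """Least-squares functional map with Tikhonov regularization:
            C = argmin ||C A - B||^2 + alpha ||C||^2,
        where A, B are the descriptors projected into each shape's basis
        (Phi_M, Phi_N: n_points x k truncated spectral bases)."""
        A = np.linalg.pinv(Phi_M) @ desc_M          # k x d source coefficients
        B = np.linalg.pinv(Phi_N) @ desc_N          # k x d target coefficients
        k = A.shape[0]
        # Normal equations: (A A^T + alpha I) C^T = A B^T
        C = np.linalg.solve(A @ A.T + alpha * np.eye(k), A @ B.T).T
        return C

    def point_to_point(C, Phi_M, Phi_N):
        """Recover a point map by nearest neighbors between the rows of
        Phi_N and the rows of the mapped source embedding Phi_M C^T."""
        emb_M = Phi_M @ C.T
        d2 = ((Phi_N[:, None, :] - emb_M[None, :, :]) ** 2).sum(-1)
        return d2.argmin(axis=1)                    # target vertex -> source vertex
    ```

    As a sanity check, mapping a shape to itself with identical descriptors yields C close to the identity and an identity point-to-point map.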