Journal of Xidian University 2024 Vol.51
Spectrum compression based autofocus algorithm for the TOPS BP image
ZHOU Shengwei, LI Ning, XING Mengdao
Journal of Xidian University    2024, 51 (1): 1-10.   DOI: 10.19665/j.issn1001-2400.20230102

In high-squint TOPS-mode SAR imaging from a maneuvering platform, the BP imaging algorithm applied in a ground-plane rectangular coordinate system can quickly produce a wide-swath SAR image free of distortion in the ground plane. However, quickly completing motion error compensation and sidelobe suppression for the BP image remains difficult in practical applications. This paper proposes an improved spectral compression method that enables follow-up operations such as autofocus on the ground-plane BP image in the high-squint TOPS mode of a maneuvering platform. First, considering that the traditional BP spectral compression method applies only to the spotlight imaging mode, an improved exact spectral compression function is derived by combining the virtual rotation center theory of high-squint TOPS SAR with wavenumber spectrum analysis. Full-aperture compression with this function yields an unambiguous ground-plane TOPS-mode BP image spectrum, on which phase gradient autofocus (PGA) can quickly complete full-aperture motion error estimation and compensation. In addition, based on the unambiguous, aligned BP image spectrum obtained by the proposed method, image sidelobe suppression can be realized by uniformly windowing in the azimuth frequency domain. Finally, the effectiveness of the proposed algorithm is verified by simulation data processing.

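The full-aperture PGA step mentioned above can be illustrated with a minimal, generic sketch. This is not the paper's exact algorithm: the improved spectral compression function and TOPS-specific handling are omitted, and the estimator below is the textbook dominant-scatterer version operating on a range-compressed image whose azimuth spectrum shares a common phase error.

```python
import numpy as np

def pga_estimate(image, iterations=5):
    """Minimal 1-D phase gradient autofocus (PGA) sketch.

    image: 2-D complex array (range bins x azimuth samples) whose
    azimuth spectrum carries a phase error common to all range bins.
    Returns the phase-corrected image.
    """
    img = image.copy()
    n_az = img.shape[1]
    for _ in range(iterations):
        # Circularly shift the brightest scatterer of each range bin to index 0,
        # removing each target's own linear phase in the spectral domain.
        shifted = np.empty_like(img)
        for r in range(img.shape[0]):
            peak = np.argmax(np.abs(img[r]))
            shifted[r] = np.roll(img[r], -peak)
        # Back to the azimuth-frequency (phase-history) domain.
        G = np.fft.fft(shifted, axis=1)
        # Phase-gradient estimate, coherently weighted across range bins.
        grad = np.angle(np.sum(np.conj(G[:, :-1]) * G[:, 1:], axis=0))
        phi = np.concatenate(([0.0], np.cumsum(grad)))
        phi -= np.linspace(phi[0], phi[-1], n_az)  # discard the linear (shift) part
        # Remove the estimated error from the full image spectrum.
        img = np.fft.ifft(np.fft.fft(img, axis=1) * np.exp(-1j * phi), axis=1)
    return img
```

Applying a quadratic azimuth phase error to point targets and then running the sketch restores most of the focused peak amplitude.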
Double windows sliding decoding of spatially-coupled quantum LDPC codes
WANG Yunjiang, ZHU Gaohui, YANG Yuting, MA Zhong, WEI Lu
Journal of Xidian University    2024, 51 (1): 11-20.   DOI: 10.19665/j.issn1001-2400.20230301

Quantum error-correcting codes are the key means of addressing the inevitable noise that accompanies quantum computing. Like their classical counterparts, spatially coupled quantum LDPC codes (SC-QLDPCs) can in principle strike a good balance between error-correcting capability and decoding delay. To address the high complexity and long decoding delay of the standard belief propagation algorithm (BPA) when decoding SC-QLDPCs, a quantum version of sliding decoding, named the double-window sliding decoding algorithm, is proposed in this paper. The algorithm is inspired by classical sliding-window decoding strategies and exploits the non-zero diagonal band structure of the two classical parity-check matrices (PCMs) associated with the SC-QLDPC of interest. The phase-flip and bit-flip error syndromes of the received codeword are obtained by sliding two windows along the diagonal bands of the two PCMs simultaneously, which enables a good trade-off between complexity and decoding delay. Numerical results verify the performance of the proposed double-window sliding decoding scheme: it not only offers low-latency decoding output but also approaches the decoding performance of the standard BPA as the window size is enlarged, thus significantly broadening the application scenarios of the SC-QLDPC.

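The idea of decoding a banded parity-check matrix window by window can be illustrated with a toy GF(2) syndrome decoder. This is a deliberate simplification, not the proposed double-window quantum decoder: it handles a single matrix and corrects isolated errors by matching in-window check patterns, but it shows why the band structure lets a small window make local decisions.

```python
import numpy as np

def sliding_window_decode(H, syndrome, win_rows=4, win_cols=7, step=2):
    """Toy sliding-window syndrome decoder for a banded parity-check
    matrix H over GF(2). Within each window, any bit whose in-window
    check pattern matches the residual window syndrome is flipped,
    so isolated errors are corrected without processing H at once."""
    m, n = H.shape
    s = syndrome.copy()
    e = np.zeros(n, dtype=np.uint8)
    for r0 in range(0, m, step):
        rows = slice(r0, min(r0 + win_rows, m))
        cols = slice(r0, min(r0 + win_cols, n))
        sw = s[rows]
        for j in range(cols.start, cols.stop):
            if not sw.any():
                break                      # window syndrome already cleared
            if np.array_equal(H[rows, j], sw):
                e[j] ^= 1                  # flip the matched bit
                s = (s + H[:, j]) % 2      # update the global syndrome
                sw = s[rows]
    return e
```

On a small band matrix with a single error, the window containing the error's checks locates and removes it.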
Electromagnetic calculation of radio wave propagation in electrically large mountainous terrain environment
WANG Nan, LIU Junzhi, CHEN Guiqi, ZHAO Yanan, ZHANG Yu
Journal of Xidian University    2024, 51 (1): 21-28.   DOI: 10.19665/j.issn1001-2400.20230210

Emerging applications such as unmanned aerial vehicles impose high signal-coverage requirements: wireless coverage is needed not only in cities but also in inaccessible mountains, deserts, and forests before remote control can truly be achieved, and in these areas the impact of terrain variation on electromagnetic propagation must be considered. The Uniform Geometrical Theory of Diffraction is an effective computational electromagnetics method for analyzing electrically large environments, and this paper applies computational electromagnetics to study the propagation of electromagnetic waves in mountainous environments. A new method for constructing an irregular terrain model is presented: available terrain data are fitted by a cubic surface algorithm, the irregular terrain is spliced together from multiple cubic surfaces, and the accuracy of the model data is verified by the root-mean-square error. Based on the terrain data, a parallel 3D geometrical optics algorithm is implemented and the distribution of the regional electromagnetic field is simulated. Field measurements in an actual mountainous terrain environment show trends consistent with the simulation results, which verifies the effectiveness of the method for analyzing electromagnetic wave propagation over irregular terrain. Considering the scale of environmental electromagnetic computation, a parallel strategy is established, and the parallel efficiency in a 100-core test remains above 80%.

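The terrain-fitting step (cubic surfaces validated by the root-mean-square error) can be sketched as a least-squares bicubic fit. The polynomial basis and fitting procedure below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def fit_cubic_surface(x, y, z):
    """Least-squares fit of a bicubic polynomial z ~ sum a_ij x^i y^j
    (0 <= i, j <= 3) to scattered terrain samples. Returns the 4x4
    coefficient grid and the root-mean-square error of the fit."""
    powers = [(i, j) for i in range(4) for j in range(4)]
    A = np.column_stack([x**i * y**j for i, j in powers])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    rmse = np.sqrt(np.mean((A @ coef - z) ** 2))
    return coef.reshape(4, 4), rmse

def eval_cubic_surface(coef, x, y):
    """Evaluate the fitted bicubic surface at point (x, y)."""
    return sum(coef[i, j] * x**i * y**j
               for i in range(4) for j in range(4))
```

A full terrain would be covered by splicing several such patches, each validated by its RMSE.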
Time-varying channel prediction algorithm based on the attention denoising and complex LSTM network
CHENG Yong, JIANG Fengyuan
Journal of Xidian University    2024, 51 (1): 29-40.   DOI: 10.19665/j.issn1001-2400.20230203

With the development of wireless communication technology, research on communication in high-speed scenarios is becoming more extensive, and obtaining accurate channel state information (CSI) is of great significance for improving the performance of a wireless communication system. Existing channel prediction algorithms for orthogonal frequency division multiplexing (OFDM) systems do not consider the influence of noise and have low prediction accuracy in high-speed scenarios. To solve these problems, a time-varying channel prediction algorithm based on attention denoising and a complex convolutional LSTM network is proposed. First, a channel-attention denoising network is proposed to denoise the CSI, reducing the influence of noise. Second, a channel prediction model based on complex convolutional layers and long short-term memory (LSTM) is constructed: the denoised CSI at historical moments is extracted and fed into the model to predict the CSI at future moments. The improved LSTM prediction model enhances the extraction of channel timing features and improves prediction accuracy. Finally, the model is trained with the Adam optimizer to predict the CSI at future moments. Simulation results show that the proposed algorithm predicts the CSI more accurately than the comparison algorithms and is applicable to time-varying channel prediction in high-speed mobile scenarios.

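The data preparation implied by the abstract (feeding denoised historical CSI into a complex-valued sequence predictor) can be sketched as follows; the window/horizon layout is an assumption for illustration, with real and imaginary parts stacked as two feature channels.

```python
import numpy as np

def make_csi_windows(h, history, horizon=1):
    """Turn a complex channel sequence h[t] into (input, target) pairs
    for a sequence predictor: each input is `history` past samples with
    real and imaginary parts as two feature channels, and the target is
    the complex sample `horizon` steps ahead."""
    X, Y = [], []
    for t in range(len(h) - history - horizon + 1):
        win = h[t:t + history]
        X.append(np.stack([win.real, win.imag], axis=-1))
        Y.append(h[t + history + horizon - 1])
    return np.array(X), np.array(Y)
```

The resulting arrays have shape (samples, history, 2) and (samples,), ready for an LSTM-style model.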
Attention autocorrelation mechanism-based residual clutter suppression method
SHEN Lu, SU Hongtao, WANG Jin, MAO Zhi, JING Xinchen, LI Ze
Journal of Xidian University    2024, 51 (1): 41-51.   DOI: 10.19665/j.issn1001-2400.20230402

Radar systems operate in an ever-changing, complex environment that produces non-uniform, time-varying clutter. Unsuppressed residual clutter can produce a significant number of false alarms, degrading target tracking performance, creating spurious trajectories, or saturating the data processing system, which in turn decreases the detection ability of the radar system. Conventional residual clutter suppression algorithms typically require feature extraction and classifier construction, steps that can result in poor generalization, difficulty in combining features, and high demands on the classifier. To address these issues, inspired by self-attention mechanisms and domain knowledge, this paper proposes a data- and knowledge-driven attention autocorrelation mechanism that effectively extracts deep features of the radar echo to distinguish targets from clutter. On this basis, a residual clutter suppression method is constructed around the attention autocorrelation mechanism, making full use of radar echo features and thereby improving residual clutter suppression capability. Simulated and measured results demonstrate that the method offers significant performance and generalization advantages for residual clutter suppression, and its parallel computing structure enhances the operational efficiency of the algorithm.

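One plausible reading of an "attention autocorrelation mechanism" is attention whose logits come from the correlation of the echo sequence with itself. The sketch below is a hypothetical minimal version under that assumption, not the authors' architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_autocorrelation(X):
    """Hypothetical attention-autocorrelation sketch.

    X: (n_pulses, d) real-valued echo features. The attention logits
    are the scaled autocorrelations of the pulse sequence, so each
    output row is a correlation-weighted mixture of all pulses."""
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)   # autocorrelation matrix as attention logits
    return softmax(scores, axis=1) @ X
```

Because each output row is a convex combination of input rows, the output stays within the per-feature range of the input; each row can be computed independently, which is what makes the structure parallelizable.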
Improved double deep Q network algorithm for service function chain deployment
LIU Daohua, WEI Dinger, XUAN Hejun, YU Changming, KOU Libo
Journal of Xidian University    2024, 51 (1): 52-59.   DOI: 10.19665/j.issn1001-2400.20230310

Network Function Virtualization (NFV) has become a key technology of next-generation communication, and Virtual Network Function Service Chain (VNF-SC) mapping is a central issue in NFV. To reduce the energy consumption of network servers and improve the quality of service, a Service Function Chain (SFC) deployment algorithm based on an improved Double Deep Q Network (DDQN) is proposed. Because the network state changes dynamically, the SFC deployment problem is modeled as a Markov Decision Process (MDP). Based on the network state and action rewards, the DDQN is trained online to obtain the optimal deployment strategy for the service function chain. To address the low learning efficiency caused by traditional deep reinforcement learning drawing experience samples uniformly from the replay pool, a prioritized experience replay method based on importance sampling is designed, which avoids high correlation between training samples and improves the learning efficiency of the neural network. Experimental results show that the proposed SFC deployment algorithm based on the improved DDQN increases the reward value and, compared with the traditional DDQN algorithm, reduces energy consumption and the blocking rate by 19.89%~36.99% and 9.52%~16.37%, respectively.

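Prioritized experience replay with importance sampling is a standard component and can be sketched directly: transitions are sampled in proportion to their (TD-error-based) priority, and importance-sampling weights correct the induced bias. The `alpha` and `beta` values below are conventional defaults, not taken from the paper.

```python
import numpy as np

class PrioritizedReplay:
    """Minimal proportional prioritized experience replay sketch:
    samples transitions with probability p_i^alpha / sum_j p_j^alpha
    and returns importance-sampling weights correcting the bias."""

    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity, self.alpha, self.beta = capacity, alpha, beta
        self.data, self.prios = [], []

    def add(self, transition, td_error=1.0):
        if len(self.data) >= self.capacity:      # drop the oldest entry
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        p = np.array(self.prios)
        p = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        w = (len(self.data) * p[idx]) ** (-self.beta)
        w = w / w.max()                          # normalize weights to (0, 1]
        return [self.data[i] for i in idx], idx, w
```

High-TD-error transitions dominate the batch, while their importance weights are correspondingly reduced.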
Research on the intent-driven network service resilience mechanism
LI Pengcheng, SONG Yanbo, YANG Chungang, LI Fuqiang
Journal of Xidian University    2024, 51 (1): 60-71.   DOI: 10.19665/j.issn1001-2400.20230311

The emergence of new technologies such as Software-Defined Networking, Network Function Virtualization, and the Intent-Driven Network has driven networks toward service-oriented, customized, and intelligent operation. However, large and complex network infrastructure has led to network management failures and frequent security attacks, making it crucial to improve network service resilience and achieve continuous network service assurance. The Intent-Driven Network can automate the entire process of generating and deploying network resilience strategies from user intent, providing networks with more flexible means to address a wide array of challenges, greatly improving network management efficiency, and enhancing network service resilience. On this basis, the paper proposes an intent-driven network service resilience control loop architecture and its implementation architecture. By introducing Belief-Desire-Intention (BDI) reasoning logic into the resilience reasoning mechanism, the network is endowed with preventive, defensive, restorative, and adaptive functionalities, enabling it to respond promptly in the early stages of a network attack, adjust resilience strategies flexibly to the specific context, counter sudden assaults, and sustain network service assurance. Finally, the effectiveness of the proposed intent-driven network service resilience mechanism is validated using Distributed Denial of Service (DDoS) attacks as a use case.

Research on aviation ad hoc network routing protocols in highly dynamic and complex scenarios
JIANG Laiwei, CHEN Zheng, YANG Hongyu
Journal of Xidian University    2024, 51 (1): 72-85.   DOI: 10.19665/j.issn1001-2400.20230313

With the rapid growth of air transportation, aviation ad hoc network (AANET) communication based on civil aviation aircraft has acquired the coverage capacity of a communication network. Finding effective means to transmit the important data of aircraft nodes in highly dynamic, uncertain, and complex scenarios, and to back them up safely, has become important for improving the reliability and manageability of the air-space-ground integrated network. However, AANET characteristics such as highly dynamic topology changes, large network span, and unstable links pose severe challenges to protocol design, especially routing protocol design. To facilitate future research on AANET routing protocols, this paper comprehensively analyzes the relevant requirements of AANET routing protocol design and surveys existing routing protocols. First, according to the characteristics of the AANET, the paper analyzes the factors, challenges, and design principles that must be considered in routing protocol design. Then, according to their design characteristics, the existing AANET routing protocols are classified and analyzed. Finally, future research directions for AANET routing protocols are discussed, providing a reference for research on the next generation of the air-space-ground integrated network in China.

Graph convolution neural network for recommendation using graph negative sampling
HUANG Heyuan, MU Caihong, FANG Yunfei, LIU Yi
Journal of Xidian University    2024, 51 (1): 86-99.   DOI: 10.19665/j.issn1001-2400.20230214

After several years of rapid development, collaborative filtering algorithms based on graph convolutional neural networks have achieved state-of-the-art performance in many recommender system scenarios. However, most of these algorithms use only simple random negative sampling when collecting negative samples and do not make full use of graph structure information. To solve this problem, a graph convolutional neural network for recommendation using graph negative sampling (GCN-GNS) is proposed. The algorithm first constructs a user-item bipartite graph and uses a graph convolutional neural network to obtain node embedding vectors. Next, a depth-first random walk is used to obtain a sequence of visited item nodes that includes both neighboring and distant item nodes. An attention layer then adaptively learns the weights of the different nodes in the walk sequence, and a dynamically updated virtual negative sample is formed according to these weights. Finally, the virtual negative sample is used to train the model more efficiently. Experimental results show that the GCN-GNS outperforms the comparison algorithms on three real public datasets in most cases, indicating that the proposed graph negative sampling method helps the model make better use of graph structure information and ultimately improves item recommendation.

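The walk-and-attend negative sampling idea can be sketched as below. The item-item adjacency, the softmax attention over the walk, and all names are illustrative assumptions rather than the GCN-GNS implementation.

```python
import numpy as np

def walk_negative_sample(adj, start_item, pos_items, walk_len, emb, rng=None):
    """Sketch of graph-based negative sampling: a random walk from an
    item node over an item graph collects candidate items not yet
    interacted with, and attention weights (softmax of similarity to
    the walk start) mix their embeddings into one virtual negative.

    adj: dict mapping item -> list of neighbor items.
    emb: (n_items, d) embedding matrix."""
    rng = rng or np.random.default_rng()
    seq, cur = [], start_item
    for _ in range(walk_len):
        nxt = rng.choice(adj[cur])
        if nxt not in pos_items:        # keep only non-interacted items
            seq.append(nxt)
        cur = nxt
    if not seq:
        return None
    logits = emb[seq] @ emb[start_item]
    w = np.exp(logits - logits.max())
    w = w / w.sum()                     # attention weights over the walk
    return w @ emb[seq]                 # dynamically weighted virtual negative
```

The virtual negative is a convex combination of walk-node embeddings, so it lies inside their convex hull rather than at any single sampled item.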
Subspace clustering algorithm optimized by non-negative Lagrangian relaxation
ZHU Dongxia, JIA Hongjie, HUANG Longxia
Journal of Xidian University    2024, 51 (1): 100-113.   DOI: 10.19665/j.issn1001-2400.20230204

Spectral relaxation is widely used in traditional subspace clustering and spectral clustering: the eigenvectors of the Laplacian matrix are computed, and since an eigenvector contains negative numbers, a 2-way clustering result can be obtained directly from the signs of its elements. For multi-way clustering, 2-way graph partitioning is applied recursively or k-means is run in the eigenvector space, so cluster labels are assigned only indirectly, and this post-processing increases the instability of the clustering results. To overcome this limitation of spectral relaxation, a subspace clustering algorithm optimized by non-negative Lagrangian relaxation is proposed, which integrates self-representation learning and rank constraints into the objective function. The similarity matrix and the membership matrix are solved by non-negative Lagrangian relaxation, and the non-negativity of the membership matrix is maintained so that it becomes a cluster posterior probability. When the algorithm converges, the clustering result is obtained directly by assigning each data object to the cluster with the largest posterior probability. Compared with existing subspace clustering and spectral clustering methods, the proposed algorithm designs a new optimization rule that assigns cluster labels directly, without additional clustering steps. The convergence of the proposed algorithm is analyzed theoretically. Extensive experiments on five benchmark clustering datasets show that the clustering performance of the proposed method is better than that of recent subspace clustering methods.

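The key point, reading labels directly from a non-negative membership matrix, can be illustrated with a multiplicative-update sketch. Updates of this form preserve non-negativity, in the spirit of non-negative relaxation; the paper's exact objective and update rule differ.

```python
import numpy as np

def nonneg_relaxed_clustering(S, k, iters=100):
    """Sketch: given a symmetric non-negative similarity matrix S,
    refine a non-negative membership matrix H with multiplicative
    updates (which keep H >= 0), then read cluster labels directly as
    the argmax of each row -- no k-means post-processing step."""
    n = S.shape[0]
    # Deterministic spread initialization from k evenly spaced columns of S.
    H = S[:, np.linspace(0, n - 1, k).astype(int)].astype(float).copy()
    for _ in range(iters):
        H *= (S @ H) / (H @ (H.T @ S @ H) + 1e-12)   # multiplicative update
        H /= np.linalg.norm(H, axis=0, keepdims=True) + 1e-12
    return H.argmax(axis=1)
```

On a near-block-diagonal similarity matrix, the rows of H concentrate on one column per block, so the argmax recovers the blocks directly.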
Three-dimensional attention-enhanced algorithm for violence scene detection
DING Xinmiao, WANG Jiaxing, GUO Wen
Journal of Xidian University    2024, 51 (1): 114-124.   DOI: 10.19665/j.issn1001-2400.20230206

To improve the ability of multimedia analysis to support Web security and effectively filter objectionable content, a violent video scene detection algorithm based on three-dimensional attention is proposed. Taking 3D DenseNet as the backbone network, the algorithm first uses P3D to extract low-level spatio-temporal features. Second, the SimAM attention module is introduced to compute channel-spatial attention, enhancing the features of key areas in each video frame. Then, a transition layer with temporal attention is designed to highlight the features of key frames in the video. In this way, channel-spatial-temporal attention is formed to better detect violent scenes. In violence detection experiments, the accuracy reaches 98.75% and 100% on Hockey and Movies, small datasets with homogeneous content, and 89.25% on RWF-2000, a large dataset with diverse content. The results show that the proposed algorithm with 3D attention effectively improves violence detection performance. In a violent content localization experiment on the VSD2014 dataset, the improved performance further demonstrates the effectiveness and generalization ability of the algorithm.

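The SimAM module has a closed, parameter-free form, which for a single feature map can be sketched as follows (2-D here for brevity; the paper applies it inside a 3-D network):

```python
import numpy as np

def simam(X, lam=1e-4):
    """SimAM-style parameter-free attention for one feature map X (H, W):
    each position's gate comes from its energy relative to the map
    statistics, so distinctive (salient) activations are emphasized."""
    n = X.size - 1
    d = (X - X.mean()) ** 2                 # deviation energy per position
    v = d.sum() / n                         # variance estimate
    e_inv = d / (4 * (v + lam)) + 0.5       # inverse-energy term
    return X / (1 + np.exp(-e_inv))         # sigmoid-gated feature map
```

A strongly deviating activation receives a gate close to 1, while background positions are attenuated toward roughly half their value.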
Self-supervised contrastive representation learning for semantic segmentation
LIU Bochong, CAI Huaiyu, WANG Yi, CHEN Xiaodong
Journal of Xidian University    2024, 51 (1): 125-134.   DOI: 10.19665/j.issn1001-2400.20230304

To improve the accuracy of semantic segmentation models while avoiding the labor and time costs of pixel-wise annotation for large-scale segmentation datasets, this paper studies pre-training by self-supervised contrastive representation learning and designs the Global-Local Cross Contrastive Learning (GLCCL) method based on the characteristics of the semantic segmentation task. The method feeds global images and a series of locally cropped image patches into the network to extract global and local visual representations respectively, and guides training with a loss function comprising global contrast, local contrast, and global-local cross contrast, enabling the network to learn both global and local visual representations as well as cross-regional semantic correlations. Using this method to pre-train BiSeNet and transferring to the semantic segmentation task yields mean intersection-over-union (MIoU) improvements of 0.24% and 0.9% over existing self-supervised contrastive representation learning and supervised pre-training methods, respectively. Experimental results show that pre-training a semantic segmentation model with unlabeled data in this way improves segmentation results and has practical value.

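The contrastive terms (global, local, and global-local cross contrast) are all built from the same InfoNCE-style loss, which can be sketched as below; the temperature value is an assumption.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss sketch: pull the anchor towards
    its positive view and push it away from negatives. All vectors are
    L2-normalized; lower loss means better anchor-positive alignment."""
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    a, p, ns = norm(anchor), norm(positive), norm(negatives)
    logits = np.concatenate([[a @ p], ns @ a]) / tau
    logits -= logits.max()                 # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

In GLCCL-style training, the anchor/positive pair would be two views of the same region (or a global view and one of its local patches for the cross term).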
Real world image tampering localization combining the self-attention mechanism and convolutional neural networks
ZHONG Hao, BIAN Shan, WANG Chuntao
Journal of Xidian University    2024, 51 (1): 135-146.   DOI: 10.19665/j.issn1001-2400.20230213

Images are an important carrier of information dissemination in the mobile Internet era, which makes malicious image tampering a potential cybersecurity threat. Unlike object-scale tampering in natural scenes, real-world image tampering appears in forged qualification certificates, forged documentation, forged screenshots, and so on. Tampered images in the real world usually involve elaborate manual interventions, so their tampering features differ from those of the natural scene and are more diverse, making the localization of real-world tampered areas more challenging. Rich dependency information is important for capturing these complex and diverse tampering features. Therefore, this paper uses a convolutional neural network for adaptive feature extraction and a reversely connected fully self-attention module for multi-stage feature attention; the tampered area is finally located by merging the multi-stage attention results. The proposed method outperforms the comparison methods on the real-world image tampering localization task, with an F1 score 8.98% higher and an AUC 3.58% higher than those of the mainstream method MVSS-Net. It also matches the performance of mainstream methods on natural-scene tampering localization, and it provides evidence that natural-scene tampering features are inconsistent with real-world tampering features. Experimental results in the two scenes show that the proposed method can effectively locate the tampered areas of tampered images and is especially effective in complicated real-world cases.

Real-time smoke segmentation algorithm combining global and local information
ZHANG Xinyu, LIANG Yu, ZHANG Wei
Journal of Xidian University    2024, 51 (1): 147-156.   DOI: 10.19665/j.issn1001-2400.20230405

Smoke segmentation is challenging because smoke is irregular and translucent and its boundary is fuzzy. A dual-branch real-time smoke segmentation algorithm based on global and local information is proposed to solve this problem. The algorithm uses a lightweight Transformer branch and a convolutional neural network branch to extract the global and local features of smoke respectively, fully learning the long-distance pixel dependencies of smoke while retaining its details. This allows smoke to be distinguished from the background accurately, improving segmentation accuracy while satisfying the real-time requirement of practical smoke detection tasks. A multilayer perceptron decoder makes full use of multi-scale smoke features and further models the global context of smoke, enhancing the perception of smoke at multiple scales and thus improving accuracy, while its simple structure reduces the decoder's computation. The algorithm reaches 92.88% mean intersection-over-union on a self-built smoke segmentation dataset with 2.96M parameters at 56.94 frames per second, and its comprehensive performance exceeds that of other smoke detection algorithms on a public dataset. Experimental results show that the algorithm combines high accuracy with fast inference and can meet the accuracy and real-time requirements of practical smoke detection tasks.

Fine-grained defense methods in federated encrypted traffic classification
ZENG Yong, GUO Xiaoya, MA Baihe, LIU Zhihong, MA Jianfeng
Journal of Xidian University    2024, 51 (1): 157-164.   DOI: 10.19665/j.issn1001-2400.20230303

In recent years, various robust algorithms and defense schemes have been proposed to prevent the harm caused by abnormal traffic to federated encrypted traffic classification models. Existing defense methods, which improve the robustness of the global model by removing the traffic of abnormal models, are coarse-grained, and such coarse-grained methods can cause excessive defense and loss of normal traffic. To solve these problems, we propose a fine-grained defense method against abnormal traffic for the collaborative federated encrypted traffic classification framework. The proposed method narrows the range of the abnormal traffic by partitioning the local datasets of abnormal nodes, achieving fine-grained localization of the abnormal traffic. According to the localization results, fine-grained defense is realized by eliminating only the abnormal traffic during model aggregation, which avoids excessive defense and loss of normal traffic. Experimental results show that the proposed method significantly improves the efficiency of model detection without affecting accuracy. Compared with the existing coarse-grained methods, the fine-grained defense method reaches an accuracy of 91.4% and improves detection efficiency by 32.3%.

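The shard-level exclusion idea can be sketched as follows. The anomaly scores and threshold are assumed inputs (the paper's localization procedure would compute them), and only the aggregation-time filtering is shown.

```python
import numpy as np

def fine_grained_aggregate(updates, shard_scores, threshold):
    """Sketch of fine-grained defense at aggregation time: instead of
    dropping a whole client, only the data shards whose anomaly score
    exceeds `threshold` are excluded before federated averaging.

    updates: client id -> (n_shards, dim) array of per-shard updates.
    shard_scores: client id -> (n_shards,) array of anomaly scores."""
    kept = []
    for cid, shards in updates.items():
        mask = shard_scores[cid] <= threshold   # keep only benign shards
        if mask.any():
            kept.append(shards[mask].mean(axis=0))
    return np.mean(kept, axis=0)
```

A coarse-grained defense would instead discard client B entirely, losing its benign shard; here the benign part still contributes to the global model.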
Medical data privacy protection scheme supporting controlled sharing
GUO Qing, TIAN Youliang
Journal of Xidian University    2024, 51 (1): 165-176.   DOI: 10.19665/j.issn1001-2400.20230104

The rational use of patients' medical and health data has promoted the development of medical research institutions. Aiming at the current difficulties of sharing medical data between patients and medical research institutions, easily leaked data privacy, and uncontrollable use of medical data, a medical data privacy protection scheme supporting controlled sharing is proposed. First, the blockchain is combined with a proxy server to design a controlled sharing model for medical data: blockchain miner nodes jointly construct proxy re-encryption keys in a distributed manner, the proxy server stores and converts medical data ciphertexts, and proxy re-encryption achieves secure sharing of medical data while protecting patient privacy. Second, a dynamic user-permission adjustment mechanism is designed in which the patient and the blockchain authorization management nodes update the access permissions of medical data through an authorization list, letting patients control the sharing of their medical data. Finally, security analysis shows that the proposed scheme achieves dynamic sharing of medical data while protecting its privacy and can also resist collusion attacks. Performance analysis shows that the scheme has advantages in communication and computation overhead and is suitable for controlled data sharing between patients or hospitals and research institutions.

Contract vulnerability repair scheme supporting inline data processing
PENG Yongxiang, LIU Zhiquan, WANG Libo, WU Yongdong, MA Jianfeng, CHEN Ning
Journal of Xidian University    2024, 51 (1): 178-186.   DOI: 10.19665/j.issn1001-2400.20230208

Smart contracts are programs deployed on the blockchain that enable distributed transactions. However, their financial attributes and immutability make them targets of hacker attacks, so vulnerable contracts must be repaired to ensure contract security. Existing contract vulnerability repair schemes suffer from low repair success rates and an inability to handle complex contracts. To this end, a contract vulnerability repair scheme supporting inline data processing is proposed in this paper. The proposed scheme first studies and formalizes the dynamic loading mechanism of the Ethereum virtual machine and constructs an inline data location algorithm based on memory copy instructions to parse and decompile the smart contract bytecode structure. It then rewrites the smart contract bytecode based on a trampoline mechanism, corrects the inline data address offsets caused by rewriting, and finally completes the vulnerability repair. A prototype tool named SCRepair implementing the scheme is deployed on the local test network Ganache for performance testing and compared with the existing repair tools EVMPatch and Smartshield. Experimental results show that SCRepair improves the bytecode rewrite success rate by 26.9% compared with EVMPatch, offers better rewrite execution stability, is less affected by the compiler version, and handles complex contracts better than Smartshield.

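The trampoline rewriting idea (redirect the vulnerable region to appended patch code that jumps back) can be illustrated on abstract instruction tuples. Real EVM bytecode, jump-destination validity, and the inline-data offset correction that the scheme performs are all omitted here.

```python
def trampoline_patch(code, vuln_start, vuln_len, patch):
    """Toy illustration of trampoline-based bytecode rewriting: the
    vulnerable region is overwritten with a jump to patch code appended
    at the end of the program, and the appended patch jumps back to the
    instruction after the region. Instructions are abstract tuples,
    not real EVM opcodes."""
    patch_addr = len(code)                 # patch goes at the end
    resume = vuln_start + vuln_len         # where execution resumes
    rewritten = list(code)
    # Overwrite the vulnerable region with a jump plus padding no-ops,
    # so all following instruction addresses stay unchanged.
    rewritten[vuln_start:resume] = ([("JUMP", patch_addr)]
                                    + [("NOP",)] * (vuln_len - 1))
    # Append the patch followed by the jump back (the trampoline).
    return rewritten + list(patch) + [("JUMP", resume)]
```

Keeping the original instruction addresses stable is what makes the subsequent offset correction tractable: only references into the appended region need fixing.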
Deduplication scheme with data popularity for cloud storage
HE Xinfeng, YANG Qinqin
Journal of Xidian University    2024, 51 (1): 187-200.   DOI: 10.19665/j.issn1001-2400.20230205

With the development of cloud computing, more enterprises and individuals outsource their data to cloud storage providers to relieve local storage pressure, and the pressure on cloud storage is becoming an increasingly prominent issue. To improve storage efficiency and reduce communication costs, data deduplication technology has been widely used. There is identical-data deduplication based on hash tables and similar-data deduplication based on Bloom filters, but both rarely consider the impact of data popularity. In fact, data outsourced to cloud storage can be divided into popular and unpopular data according to popularity. Popular data are accessed frequently and have numerous duplicate copies and similar data in the cloud, so high-accuracy deduplication is required; unpopular data are rarely accessed and have fewer duplicates and similar data, so low-accuracy deduplication meets the demand. To address this, a novel Bloom filter variant named PDBF (popularity dynamic Bloom filter) is proposed, which incorporates data popularity into the Bloom filter. Moreover, a PDBF-based deduplication scheme is constructed that performs different degrees of deduplication depending on how popular a datum is. Experiments demonstrate that the scheme makes an excellent trade-off among computation time, memory consumption, and deduplication efficiency.

Disinformation spreading control model based on key nodes bi-objective optimization
JING Junchang, ZHANG Zhiyong, BAN Aiying
Journal of Xidian University    2024, 51 (1): 201-209.   DOI: 10.19665/j.issn1001-2400.20230209

The spread control of disinformation is a hot area of global cyberspace security governance. At present, research on disinformation spread control in online social networks has not considered the practical cost incurred by controlling the set of key nodes. This paper proposes a disinformation spreading control model based on bi-objective optimization over key nodes. First, according to the spread influence of social user nodes in the 1-hop and 2-hop areas, as well as complex-network characteristics such as node degree centrality and k-shell, the two objectives of control effect and control cost are expressed mathematically. Second, a bit-flipping mutation operator incorporating an adaptive nonlinear strategy is designed to improve the performance of the NSGA-Ⅱ algorithm in discrete search spaces. The improved NSGA-Ⅱ algorithm is used to select a set of key disinformation-spreading nodes that maximizes the control effect and minimizes the control cost. Finally, experiments are carried out on a real online social network platform, and the influence of the model parameters on the control cost and control effect is analyzed and discussed. Experimental results show that this model has clear advantages over existing methods in the combined control-cost/control-effect index RTCTE. The model is applicable to lowest-cost disinformation spreading control in large-scale complex social networks.
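A bit-flip mutation with an adaptive nonlinear rate, as used to improve NSGA-Ⅱ above, can be sketched as follows. The quadratically decaying schedule and all parameter values are illustrative assumptions, not the paper's exact formula.

```python
import random

def mutation_rate(gen, max_gen, p0=0.2, p_min=0.01):
    """Nonlinear (quadratic) decay from p0 down to p_min: early
    generations explore aggressively, late generations fine-tune."""
    progress = gen / max_gen
    return p_min + (p0 - p_min) * (1 - progress) ** 2

def adaptive_bitflip(chromosome, gen, max_gen, rng=random.random):
    """Flip each bit of a 0/1 chromosome independently with the
    generation-dependent probability above."""
    p = mutation_rate(gen, max_gen)
    return [1 - b if rng() < p else b for b in chromosome]
```

In a key-node selection encoding, each bit would mark whether the corresponding node is placed in the controlled set, so the decaying rate gradually freezes the node selection as the Pareto front stabilizes.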

Improvement of the neural distinguishers of several ciphers
YANG Xiaoxue, CHEN Jie
Journal of Xidian University    2024, 51 (1): 210-222.   DOI: 10.19665/j.issn1001-2400.20230212

In order to further study the application of neural networks in cryptanalysis, neural distinguishers for several typical lightweight block cipher algorithms are constructed and improved using a deep residual network and traditional differential cryptanalysis techniques. The main results are as follows. First, neural distinguishers for 4 to 7 rounds of PRESENT, 3 rounds of KLEIN, 7 to 9 rounds of LBlock and 7 to 10 rounds of Simeck32/64 are constructed and analyzed based on the respective block cipher structures. Second, based on the characteristics of SPN-structure block ciphers, the neural distinguishers of PRESENT and KLEIN are improved, raising the accuracy by up to about 5.12%. In the study of LBlock's neural distinguisher, it is verified that this improvement is not suitable for Feistel-structure block ciphers. Third, based on the characteristics of the Simeck32/64 algorithm, its neural distinguisher is improved, with the accuracy raised by 2.3%. Meanwhile, the improved Simeck32/64 method is combined with polyhedral differential analysis, and the accuracy of the existing 8-round and 9-round Simeck32/64 polyhedral neural distinguishers is increased by 1% and 3.2%, respectively. Finally, the three types of neural distinguishers obtained in the experiments are applied to a last-round key recovery attack on 11-round Simeck32/64, with the best result being a 99.4% success rate over 1 000 attacks at a data complexity of 2^6.6.

Research on the multi-objective algorithm of UAV cluster task allocation
GAO Weifeng, WANG Qiong, LI Hong, XIE Jin, GONG Maoguo
Journal of Xidian University    2024, 51 (2): 1-12.   DOI: 10.19665/j.issn1001-2400.20230413

Aiming at the cooperative task allocation problem of a UAV swarm in target recognition scenarios, an optimization model taking recognition cost and recognition benefit as the objectives is established, and a decomposition-based multi-objective differential evolution algorithm is designed to solve the model. First, an elite initialization method is proposed, in which the initial solutions are screened to improve the quality of the solution set while ensuring a uniform distribution of the obtained nondominated solutions. Second, a multi-objective differential evolution operator under integer encoding is constructed based on the model characteristics to improve the convergence speed of the algorithm. Finally, a tabu search strategy with restrictions is designed so that the algorithm has the ability to jump out of local optima. The algorithm provides a set of nondominated solutions, so that a more reasonable optimal solution can be selected according to actual needs. After the allocation scheme is obtained by the above method, a task reallocation strategy is designed based on the auction algorithm, and the allocation scheme is further adjusted to cope with the unexpected loss of UAVs. Simulation experiments verify the effectiveness of the proposed algorithm in solving small-, medium- and large-scale task allocation problems; moreover, compared with other algorithms, the nondominated set obtained by the proposed algorithm has a higher quality, consuming less recognition cost and obtaining higher recognition benefit, which indicates that the proposed algorithm has certain advantages.

Workflow deployment method based on graph segmentation with communication and computation jointly optimized
MA Yinghong, LIN Liwan, JIAO Yi, LI Qinyao
Journal of Xidian University    2024, 51 (2): 13-27.   DOI: 10.19665/j.issn1001-2400.20231206

To improve computing efficiency, decomposing complex large-scale tasks into simple tasks, modeling them as workflows and completing them on parallel distributed computing clusters has become an important way for cloud data centers to cope with the continuous growth of computing and networking tasks. However, the communication bandwidth consumed by inter-task transmission can easily cause network congestion in the data center. It is therefore of great significance to deploy workflows scientifically, taking into account both computing efficiency and communication overhead. There are two typical types of workflow deployment algorithms: list-based and cluster-based. The former focuses on improving computing efficiency but pays no attention to inter-task communication cost, so deploying large-scale workflows easily brings a heavy network load. The latter focuses on minimizing communication cost but sacrifices the parallel computing efficiency of the tasks in the workflow, which results in a long workflow completion time. This work fully explores the dependency and parallelism between tasks in a workflow from the perspective of graph theory. By improving a classic graph segmentation algorithm, the community discovery algorithm, a balance between minimizing communication cost and maximizing computational parallelism is achieved in workflow task partitioning. Simulation results show that, under different workflow scales, the proposed algorithm reduces the communication cost by 35%~50% compared with the typical list-based deployment algorithm, and the workflow completion time by 50%~65% compared with the typical cluster-based deployment algorithm. Moreover, its performance is stable for workflows with different communication-computation ratios.

UAV swarm power allocation strategy for resilient topology construction
HU Jialin, REN Zhiyuan, LIU Anni, CHENG Wenchi, LIANG Xiaodong, LI Shaobo
Journal of Xidian University    2024, 51 (2): 28-45.   DOI: 10.19665/j.issn1001-2400.20230314

A topology construction method with strong toughness is proposed for the unmanned combat network to address the network performance degradation and paralysis caused by failures of the network itself or by enemy attack and interference. The method first takes edge-connectivity as the toughness indicator of the network. Second, based on the max-flow min-cut theorem, the minimum cut is used as the measure of this indicator; on this basis, considering the limited power of a single UAV and of the system as a whole, the topology is constructed by means of power allocation to improve network toughness from the physical-layer perspective, and a power allocation strategy for the unmanned combat network under power constraints is proposed. Finally, the particle swarm optimization (PSO) algorithm is used to solve the topology toughness optimization problem under the power constraint. Simulation results show that, under the same modulation and power constraints, the PSO-based power allocation scheme can effectively improve the toughness of the unmanned combat network compared with other power allocation algorithms under both link-failure and node-failure modes, and that the average successful service arrival rate of the constructed network remains above 95% with about 66.7% of links failed, which meets actual combat requirements.
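The max-flow min-cut measurement mentioned above can be sketched with a tiny Edmonds-Karp routine on unit-capacity undirected edges: by the max-flow min-cut theorem, the returned flow between two nodes equals the size of their minimum edge cut (the network's s-t edge-connectivity). This is an illustrative sketch, not the paper's implementation, and it computes the cut for one node pair only.

```python
from collections import deque

def max_flow(adj, s, t):
    """Edmonds-Karp on a unit-capacity undirected graph given as an
    adjacency dict; the result equals the minimum s-t edge cut."""
    cap = {u: {} for u in adj}
    for u in adj:
        for v in adj[u]:
            cap[u][v] = 1          # each undirected edge: capacity 1 both ways
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:   # BFS for a shortest augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:   # push one unit along the path
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] = cap[v].get(u, 0) + 1
            v = u
        flow += 1
```

The overall edge-connectivity of a topology is the minimum of this value over node pairs, which is what a power-allocation search such as the PSO above would try to raise.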

Optimization of light sources for the IRS-assisted indoor VLC system considering HPSA
HE Huimeng, YANG Ting, SHI Huili, WANG Ping, BING Zhe, WANG Xing, BAI Bo
Journal of Xidian University    2024, 51 (2): 46-55.   DOI: 10.19665/j.issn1001-2400.20240103

Aiming at the problem of uneven optical power distribution on the receiving plane in a visible light communication (VLC) system, a light source optimization method for an intelligent reflecting surface (IRS)-assisted indoor VLC system based on the hybrid particle swarm algorithm (HPSA) is proposed. Taking two layout schemes with 16 light-emitting diodes (LEDs), rectangular and hybrid arrangements, as examples, the variance of the received optical power on the receiving plane is set as the fitness function, and the proposed HPSA is combined with the IRS technology to optimize the half-power angle and positional layout of the LEDs as well as the yaw and roll angles of the IRS. Subsequently, the initial (unoptimized) system, the VLC system optimized with the HPSA, and the IRS-aided VLC system optimized with the HPSA are simulated and compared. The results indicate that, when the first reflection link is considered, the fluctuations of the received optical power and signal-to-noise ratio of the HPSA-optimized VLC system decrease significantly for both light source layouts compared with the original VLC system. In the rectangular layout, the HPSA-optimized IRS-aided indoor VLC system improves the received optical power fluctuations as much as the HPSA-optimized VLC system does, while in the hybrid layout its improvement in optical power fluctuations is significantly better than that of the HPSA-optimized VLC system. Among the three VLC systems, the IRS-aided VLC system based on HPSA optimization has the largest average received optical power. Besides, the average root mean square delay spread of the three VLC systems is better under the hybrid layout than under the rectangular layout. This work will benefit the study of light source distribution in indoor VLC systems.

Highly dynamic multi-channel TDMA scheduling algorithm for the UAV ad hoc network in post-disaster
SUN Yanjing, LI Lin, WANG Bowen, LI Song
Journal of Xidian University    2024, 51 (2): 56-67.   DOI: 10.19665/j.issn1001-2400.20230414

Extreme emergencies, mainly natural disasters and accidents, pose serious challenges to the rapid reorganization of emergency communication networks and the real-time transmission of disaster information. It is urgent to build an emergency communication network with rapid response capability and on-demand dynamic adjustment. To realize real-time transmission of disaster information under the extreme "three interruptions" conditions of power failure, circuit interruption and network disconnection, a Flying Ad Hoc Network (FANET) can be formed by many unmanned aerial vehicles to provide network coverage of the disaster-stricken area. Aiming at the channel collision problem caused by unreasonable scheduling of FANET communication resources in the complex post-disaster environment, this paper proposes a multi-channel time division multiple access (TDMA) scheduling algorithm based on adaptive Q-learning. According to the link interference relationships between UAVs, a vertex interference graph is established and, combined with graph coloring theory, the multi-channel TDMA scheduling problem is abstracted as a dynamic double coloring problem in highly dynamic scenarios. Considering the high-speed mobility of UAVs, the learning factor of Q-learning is adaptively adjusted according to changes in the network topology, achieving a trade-off between the convergence speed of the algorithm and its ability to explore the optimal solution. Simulation experiments show that the proposed algorithm achieves a trade-off between network communication conflict and convergence speed, and can solve the problems of resource allocation decisions and adaptation to fast-changing topology in post-disaster highly dynamic scenarios.
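A single tabular Q-learning step with a topology-adaptive learning factor can be sketched as below. The linear blend between a minimum and maximum alpha is an illustrative assumption, not the paper's exact adaptation rule, and all names are hypothetical.

```python
def q_update(Q, state, action, reward, next_state, topology_change_rate,
             alpha_min=0.1, alpha_max=0.9, gamma=0.9):
    """One Q-learning update whose learning factor grows with the
    topology change rate in [0, 1]: the faster the network topology
    changes, the faster stale experience is overwritten."""
    alpha = alpha_min + (alpha_max - alpha_min) * topology_change_rate
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return alpha
```

In the scheduling setting above, a state would encode a UAV's local interference situation and an action a (slot, channel) choice; collisions yield negative rewards, so colliding assignments are gradually abandoned.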

Drone identification based on the normalized cyclic prefix correlation spectrum
ZHANG Hanshuo, LI Tao, LI Yongzhao, WEN Zhijin
Journal of Xidian University    2024, 51 (2): 68-75.   DOI: 10.19665/j.issn1001-2400.20230704

Radio-frequency (RF)-based drone identification technology has the advantages of a long detection distance and low environmental dependence, so it has become an indispensable approach to monitoring drones. How to identify a drone effectively in the low signal-to-noise ratio (SNR) regime is a hot topic in current research. To ensure excellent video transmission quality, drones commonly adopt orthogonal frequency division multiplexing (OFDM) modulation with a cyclic prefix (CP) on their video transmission links. Based on this property, we propose a drone identification algorithm based on the convolutional neural network (CNN) and the normalized CP correlation spectrum. Specifically, we first analyze the OFDM symbol durations and CP durations of drone signals, on the basis of which the normalized CP correlation spectrum is calculated. When the modulation parameters of a drone signal match those used to calculate the spectrum, several correlation peaks appear in the normalized CP correlation spectrum. The positions of these peaks reflect protocol characteristics of the drone signal, such as the frame structure and burst rules. Finally, a CNN is trained to extract these characteristics from the normalized CP correlation spectrum and identify the drone. In this work, a universal software radio peripheral (USRP) X310 is utilized to collect the RF signals of five drones to construct the experimental dataset. Experimental results show that the proposed algorithm performs better than spectrum-based and spectrogram-based algorithms, and that it remains effective at low SNRs.
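The normalized CP correlation itself can be sketched directly from the CP property: an OFDM symbol's cyclic prefix repeats the last n_cp samples of the symbol n_fft samples later, so correlating x[d..d+n_cp) with x[d+n_fft..) and normalizing by energy peaks near 1 at true symbol starts when (n_fft, n_cp) match the signal. A minimal sketch, not the paper's exact estimator:

```python
def normalized_cp_correlation(x, n_fft, n_cp):
    """Normalized CP correlation of a complex sample list x at every
    candidate offset d; Cauchy-Schwarz keeps each value in [0, 1]."""
    out = []
    for d in range(len(x) - n_fft - n_cp):
        num = sum(x[d + k] * x[d + k + n_fft].conjugate() for k in range(n_cp))
        e1 = sum(abs(x[d + k]) ** 2 for k in range(n_cp))
        e2 = sum(abs(x[d + k + n_fft]) ** 2 for k in range(n_cp))
        denom = (e1 * e2) ** 0.5
        out.append(abs(num) / denom if denom else 0.0)
    return out
```

The spacing of the resulting peaks reveals the symbol duration and burst pattern, which is the protocol fingerprint the CNN above learns from.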

Study of the parallel MoM on a domestic heterogeneous DCU platform
JIA Ruipeng, LIN Zhongchao, ZUO Sheng, ZHANG Yu, YANG Meihong
Journal of Xidian University    2024, 51 (2): 76-83.   DOI: 10.19665/j.issn1001-2400.20230504

In view of the development trend of the domestic supercomputer CPU+DCU heterogeneous architecture, research on a CPU+DCU massively heterogeneous parallel higher-order method of moments is carried out. First, the basic strategy for accelerating the method-of-moments computation on the DCU is given. Building on the load-balancing parallel strategy of the homogeneous parallel method of moments, an efficient "MPI+OpenMP+DCU" heterogeneous parallel programming framework is proposed to address the mismatch between computing tasks and computing power. In addition, a fine-grained task division strategy and asynchronous communication technology are adopted to optimize the pipeline design of the DCU computation process, thus overlapping computation and communication and improving the acceleration performance of the program. The accuracy of the CPU+DCU heterogeneous parallel method of moments is verified by comparing its simulation results with those of the finite element method. Scalability analysis on the domestic DCU heterogeneous platform shows that the implemented CPU+DCU heterogeneous co-computing program achieves a 5.5~7.0 times acceleration at different parallel scales, and that the parallel efficiency reaches 73.5% when scaling from 360 nodes to 3600 nodes (1,036,800 cores in total).

Obstacle avoidance algorithm for the mobile robot with vibration suppression
WU Tingming, WU Xianyun, DENG Liang, LI Yunsong
Journal of Xidian University    2024, 51 (2): 84-95.   DOI: 10.19665/j.issn1001-2400.20230701

Aiming at the problem that dynamic obstacle avoidance algorithms for indoor mobile robots are prone to local dead zones, an improved vector field histogram (VFH) dynamic obstacle avoidance algorithm is proposed. First, building on traditional VFH-class algorithms, a path length cost and an evaluation index of trough width are introduced into the candidate evaluation function of the trough to reduce the probability of the mobile robot falling into local dead zones and to improve path smoothness. Second, in view of the problem that local obstacle avoidance algorithms are limited to the local environment and easily oscillate back and forth near obstacles, an oscillation evaluation function is introduced, with an oscillation evaluation curve drawn by calculating the weighted Euclidean distances from the pose of the mobile robot to the starting and ending points. Automatic peak detection and a first-order forward difference curve are employed to locate the oscillation positions, and oscillation suppression is then applied to make the mobile robot escape the local dead zones. Simulation results show that, within 100 simulation scenarios, the number of scenarios in which the improved VFH algorithm falls into local dead zones is reduced by 70, the average number of planning iterations is decreased by 32.3, the average path length is reduced by 26.2%, and the average cumulative turning angle declines by 79.6%. The algorithm can effectively reduce the cost of local obstacle avoidance, improve path smoothness and reduce the probability of falling into dead zones in special local environments.
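A VFH-style candidate valley evaluation with the added trough-width index can be sketched as below. The weights, the exact cost form and the omission of the path length term are all illustrative simplifications of the improved evaluation function described above.

```python
def candidate_cost(cand_dir, target_dir, prev_dir, trough_width,
                   w_target=0.5, w_prev=0.2, w_width=0.3):
    """Cost of a candidate steering direction (degrees): deviation from
    the target and previous headings is penalized, while wider troughs
    (safer, smoother passages) are rewarded via a 1/width term."""
    def ang(a, b):               # circular angle difference in degrees
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return (w_target * ang(cand_dir, target_dir)
            + w_prev * ang(cand_dir, prev_dir)
            + w_width / max(trough_width, 1))
```

The planner would evaluate this cost for every candidate trough direction in the polar histogram and steer toward the minimum; the width term biases the choice away from narrow gaps that tend to trap the robot in dead zones.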

Research on lightweight and feature enhancement of SAR image ship targets detection
GONG Junyang, FU Weihong, FANG Houzhang
Journal of Xidian University    2024, 51 (2): 96-106.   DOI: 10.19665/j.issn1001-2400.20230407

The accuracy of ship target detection in synthetic aperture radar (SAR) images is susceptible to nearshore clutter, and existing detection algorithms are highly complex and difficult to deploy on embedded devices. To address these problems, a lightweight and high-precision SAR image ship target detection algorithm, CA-Shuffle-YOLO (Coordinate Shuffle You Only Look Once), is proposed in this article. Based on the YOLOv5 target detection algorithm, the backbone network is improved in two aspects: lightweight design and feature refinement. A lightweight module is introduced to reduce the computational complexity of the network and improve the inference speed, and a coordinate attention module is introduced to enhance the algorithm's ability to extract detailed information on nearshore ship targets. In the feature fusion network, weighted feature fusion and cross-module fusion are used to enhance the model's ability to fuse detailed information on SAR ship targets, while depthwise separable convolution is used to reduce the computational complexity and improve real-time performance. Tests and comparison experiments on the SSDD ship target detection dataset show that the detection accuracy of CA-Shuffle-YOLO is 97.4%, the detection frame rate is 206 FPS, and the required computational complexity is 6.1 GFlops. Compared with the original YOLOv5, the frame rate of the proposed algorithm is 60 FPS higher, while its computational complexity is only 12% of that of the ordinary YOLOv5.

Beacon-aided CPD indoor positioning method for the BeiDou pseudo-satellite
ZHANG Heng, YU Baoguo, PAN Shuguo
Journal of Xidian University    2024, 51 (2): 107-115.   DOI: 10.19665/j.issn1001-2400.20230409

To achieve high-precision positioning with BeiDou pseudo-satellite signals in small-scale indoor spaces, reducing the cost of network construction while improving the timeliness and accuracy of indoor positioning is an important issue for the future. In this paper, a BeiDou pseudo-satellite carrier phase difference (CPD) positioning method assisted by indoor node beacons is proposed, which fully exploits the characteristics of small-scale spaces in indoor environments. First, the problem of large-scale fingerprint construction is reduced to fingerprint beacons, and the concept of the indoor node beacon is proposed. The connection between a small-scale space and its surrounding space is realized by beacon nodes, and the construction and processing of the beacon characteristic spectrum based on the carrier-to-noise ratio (CN0) and carrier phase are analyzed. Then, based on the indoor node beacon, the process of CPD-based position estimation is presented. Finally, a location search algorithm considering the constraints of pedestrian position and velocity space is proposed based on particle swarm optimization (PSO). Experimental results in a real environment show that, with the indoor node beacon, a dynamic positioning accuracy of 30 cm and a positioning accuracy of 25 cm in a suspended state can be achieved. Compared with inertial navigation, the method imposes more relaxed attitude conditions and is suitable for high-precision positioning in small-scale spaces. The proposed algorithm has good applicability in small-scale spaces.

Research on the construction and application of polar codes for shallow water acoustic communication
XING Lijuan, LI Zhuo, HUANG Yanbiao
Journal of Xidian University    2024, 51 (2): 116-125.   DOI: 10.19665/j.issn1001-2400.20230505

To realize high-speed and high-reliability communication in shallow water environments, the performance of polar code encoding and decoding technology in shallow water acoustic communication is studied. Monte Carlo construction is used to construct polar codes on the time-invariant, quasi-stationary and time-variant channel models established based on ray acoustic theory, and its complexity and performance are compared with those of the channel polarization and channel degradation construction algorithm and the base-symmetric extended polarization weight construction algorithm. The constructed polar code is adopted as the channel coding scheme for an underwater acoustic communication system based on orthogonal frequency division multiplexing, and the decoding scheme uses a cyclic redundancy check-aided successive cancellation list decoding algorithm. The performance of polar codes on the three channels is determined by simulation, in comparison with low-density parity check (LDPC) codes of the same code length and code rate. Experimental results show that, on the three channels and over the signal-to-noise ratio range of interest, polar codes have a gain of about 0.5 dB ~ 1.2 dB relative to LDPC codes. Simulation comparisons on the three channels also show that, compared with LDPC codes, polar codes constructed from the channel achieve better gains in harsh channel environments and have lower encoding and decoding complexity, which proves the competitiveness and broad application prospects of polar codes in energy- and resource-limited shallow sea acoustic communication.
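What a polar code "construction" selects, the most reliable synthetic bit channels, can be illustrated with the textbook Bhattacharyya recursion for a binary erasure channel. This is a simpler stand-in for the Monte Carlo construction used above, shown only to make the selection step concrete; real underwater channels require the Monte Carlo or degradation-based constructions the paper compares.

```python
def bec_bhattacharyya(n_levels, z0=0.5):
    """Track the Bhattacharyya parameter z of each synthetic channel
    through n_levels of the polar transform on a BEC with erasure
    probability z0: z_minus = 2z - z^2, z_plus = z^2."""
    z = [z0]
    for _ in range(n_levels):
        nxt = []
        for zi in z:
            nxt.append(2 * zi - zi * zi)   # "minus" (degraded) channel
            nxt.append(zi * zi)            # "plus" (upgraded) channel
        z = nxt
    return z

def info_positions(n_levels, k, z0=0.5):
    """Indices of the k most reliable bit channels (smallest z),
    i.e. the information set; the rest become frozen bits."""
    z = bec_bhattacharyya(n_levels, z0)
    return sorted(sorted(range(len(z)), key=lambda i: z[i])[:k])
```

For a length-4 code at rate 1/2 on BEC(0.5) this picks positions 2 and 3, matching the standard hand-computed example.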

Efficient seed generation method for software fuzzing
LIU Zhenyan, ZHANG Hua, LIU Yong, YANG Libo, WANG Mengdi
Journal of Xidian University    2024, 51 (2): 126-136.   DOI: 10.19665/j.issn1001-2400.20230901

As one of the effective ways to exploit software vulnerabilities in the current software engineering field, fuzzing plays a significant role in discovering potential software vulnerabilities. Traditional seed selection strategies in fuzzing cannot effectively generate high-quality seeds, so the test cases generated by mutation fail to reach deeper paths and trigger more security vulnerabilities. To address these challenges, a seed generation method for efficient fuzzing based on an improved generative adversarial network (GAN) is proposed, which can flexibly extend the types of generated seeds through encoding and decoding technology and significantly improve the fuzzing performance for most applications with different input types. In experiments, the seed generation strategy adopted in this paper significantly improved coverage and unique crashes, and effectively increased the seed generation speed. Six open-source programs with different highly structured inputs were selected to demonstrate the effectiveness of the strategy. On average, branch coverage increased by 2.79%, the number of paths increased by 10.35%, and 86.92% additional unique crashes were found compared with the original strategy.

Integration of pattern search into the grasshopper optimization algorithm and its applications
XIAO Yixin, LIU Sanyang
Journal of Xidian University    2024, 51 (2): 137-156.   DOI: 10.19665/j.issn1001-2400.20230602

When intelligent optimization algorithms are applied to complex optimization problems, balancing exploration and exploitation is of great significance for obtaining optimal solutions. This paper therefore proposes a grasshopper optimization algorithm integrating pattern search to address the limitations of the traditional grasshopper optimization algorithm on complex problems, such as low convergence accuracy, weak search capability and susceptibility to local optima. First, a Sine chaotic mapping is introduced to initialize the positions of the grasshopper individuals, reducing the probability of overlap between individuals and enhancing the diversity of the population in the early iterations. Second, the pattern search method is employed to perform a local search around the currently found optimum of the population, thereby improving the convergence speed and optimization accuracy of the algorithm. Additionally, to avoid falling into local optima in the later stages of the algorithm, a reverse learning strategy based on convex lens imaging is introduced. In the experimental section, a series of ablation experiments is conducted on the improved grasshopper algorithm to validate the individual effectiveness of each strategy: the Sine chaotic mapping, pattern search and reverse learning. Simulation experiments are performed on two sets of test functions, with the results analyzed using the Wilcoxon rank-sum test and the Friedman test. Experimental results consistently demonstrate that the grasshopper algorithm improved with the integrated pattern search strategy exhibits significant enhancements in both convergence speed and optimization accuracy. Furthermore, applying the improved algorithm to mobile robot path planning further validates its effectiveness.
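The two ingredients named above, chaotic initialization and a pattern search refinement, can be sketched as follows. The map x_{k+1} = sin(pi * x_k) is one common form of the Sine chaotic map and the Hooke-Jeeves exploratory move is a standard pattern search step; the paper's exact parameterization may differ.

```python
import math

def sine_chaotic_sequence(x0, n):
    """Sine chaotic map: n successive values in (0, 1], used to spread
    initial positions over the search range with little overlap."""
    seq, x = [], x0
    for _ in range(n):
        x = math.sin(math.pi * x)
        seq.append(x)
    return seq

def pattern_search_step(f, x, step):
    """One Hooke-Jeeves exploratory move around the current best
    solution x: probe +/-step along each coordinate, keeping any
    improvement (lower f)."""
    best = list(x)
    for i in range(len(x)):
        for delta in (step, -step):
            trial = list(best)
            trial[i] += delta
            if f(trial) < f(best):
                best = trial
    return best
```

In the hybrid algorithm, the chaotic sequence seeds the population and the pattern search step is applied repeatedly (with a shrinking step size) to the best grasshopper found in each iteration.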

Hyperspectral image denoising based on tensor decomposition and adaptive weight graph total variation
CAI Mingjiao, JIANG Junzheng, CAI Wanyuan, ZHOU Fang
Journal of Xidian University    2024, 51 (2): 157-169.   DOI: 10.19665/j.issn1001-2400.20230412

During the acquisition of hyperspectral images, various kinds of noise are inevitably introduced owing to objective factors such as observation conditions, the material properties of the imager and transmission conditions, which severely reduce image quality and limit the accuracy of subsequent processing. Denoising is therefore an extremely important preprocessing step for hyperspectral images. For the hyperspectral image denoising problem, a denoising algorithm based on low-rank tensor decomposition and adaptive weight graph total variation regularization, named LRTDGTV, is proposed in this paper. Specifically, low-rank tensor decomposition is used to characterize the global correlation among all bands, and adaptive weight graph total variation regularization is adopted to characterize the piecewise smoothness of hyperspectral images in the spatial domain and preserve their edge information. In addition, sparse noise (including stripe noise, impulse noise and deadline noise) and Gaussian noise are characterized by the l1-norm and the Frobenius norm, respectively. The denoising problem can thus be formulated as a constrained optimization problem involving low-rank tensor decomposition and adaptive weight graph total variation regularization, which is solved by the augmented Lagrange multiplier (ALM) method. Experimental results show that the proposed algorithm fully characterizes the inherent structural characteristics of hyperspectral image data and achieves better denoising performance than existing algorithms.
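The constrained model described in the abstract can be written schematically as follows, where the clean image tensor, sparse noise and Gaussian noise are separated and each term plays the role named above; the symbols, the generic low-rank term LR(·) and the weights λ₁, λ₂, λ₃ are illustrative, not the paper's exact notation.

```latex
\min_{\mathcal{X},\,\mathcal{S},\,\mathcal{N}}
  \ \mathrm{LR}(\mathcal{X})
  \;+\; \lambda_{1}\,\mathrm{GTV}_{w}(\mathcal{X})
  \;+\; \lambda_{2}\,\|\mathcal{S}\|_{1}
  \;+\; \lambda_{3}\,\|\mathcal{N}\|_{F}^{2}
\qquad \text{s.t.} \qquad
  \mathcal{Y} \;=\; \mathcal{X} + \mathcal{S} + \mathcal{N},
```

where $\mathcal{Y}$ is the observed cube, $\mathrm{LR}(\mathcal{X})$ is the low-rank tensor decomposition term capturing spectral correlation, $\mathrm{GTV}_{w}$ is the adaptive weight graph total variation enforcing spatial piecewise smoothness, the $\ell_{1}$ term absorbs sparse (stripe/impulse/deadline) noise and the Frobenius term absorbs Gaussian noise; the ALM method alternately updates each variable with the constraint enforced through a Lagrange multiplier tensor.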

Adaptive density peak clustering algorithm
ZHANG Qiang, ZHOU Shuisheng, ZHANG Ying
Journal of Xidian University    2024, 51 (2): 170-181.   DOI: 10.19665/j.issn1001-2400.20230604

Density peak clustering (DPC) is widely used in many fields because of its simplicity and high efficiency. However, it has two disadvantages: ① for data sets with uneven cluster density and class imbalance, it is difficult to identify the real cluster centers in the decision graph provided by DPC; ② there exists a "chain effect" in which misallocating a point of highest density in a region causes all points in that region to point to the same false cluster. In view of these two deficiencies, the concept of the natural neighbor (NaN) is introduced, and a density peak clustering algorithm based on natural neighbors (DPC-NaN) is proposed, which uses the new natural neighborhood density to identify noise points, selects the initial pre-clustering center points, and allocates the non-noise points according to the density peak method to obtain the pre-clustering. By determining the boundary points and merging radius of the pre-clustering, the pre-clustering results can be adaptively merged into the final clustering. The proposed algorithm eliminates the need for manual parameter presetting and alleviates the "chain effect" problem. Experimental results show that, compared with related clustering algorithms, the proposed algorithm obtains better clustering results on typical data sets and performs well in image segmentation.
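The ρ/δ computation at the heart of DPC can be sketched as below. The k-nearest-neighbour density used here is only a simple stand-in for the natural-neighbour density of DPC-NaN, and the function names are illustrative; cluster centers are the points where both ρ (density) and δ (distance to the nearest denser point) are large.

```python
def density_peaks(points, k=2):
    """For each point: rho = inverse mean distance to its k nearest
    neighbours, delta = distance to the nearest point of strictly
    higher density (or the farthest point, for the global peak)."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    n = len(points)
    rho = []
    for i in range(n):
        d = sorted(dist(points[i], points[j]) for j in range(n) if j != i)
        rho.append(1.0 / (1e-12 + sum(d[:k]) / k))
    delta = []
    for i in range(n):
        higher = [dist(points[i], points[j])
                  for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher
                     else max(dist(points[i], points[j])
                              for j in range(n) if j != i))
    return rho, delta
```

The misallocation behind the "chain effect" arises in the follow-up assignment step (each point inherits the label of its nearest denser neighbour), which is exactly what the pre-clustering merge of DPC-NaN mitigates.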

Study of EEG classification of depression by multi-scale convolution combined with the Transformer
ZHAI Fengwen, SUN Fanglin, JIN Jing
Journal of Xidian University    2024, 51 (2): 182-195.   DOI: 10.19665/j.issn1001-2400.20230211

When deep learning models are used to classify the EEG signals of depression, single-scale convolution extracts insufficient features, and convolutional neural networks are limited in perceiving the global dependencies of EEG signals. To address these problems, a multi-scale dynamic convolution network module and a gated Transformer encoder module are designed and combined with a temporal convolution network, and a hybrid network model, MGTTCNet, is proposed to classify the EEG signals of patients with depression and healthy controls. First, multi-scale dynamic convolution captures the multi-scale time-frequency information of EEG signals from the spatial and frequency domains. Second, the gated Transformer encoder learns global dependencies in the EEG signals, with the multi-head attention mechanism effectively enhancing the network's ability to express relevant EEG features. Third, the temporal convolution network extracts the temporal features available in the EEG signals. Finally, the extracted abstract features are fed into the classification module for classification. The proposed model is experimentally validated on the public data set MODMA using the hold-out method and 10-fold cross validation, achieving classification accuracies of 98.51% and 98.53%, respectively. Compared with the baseline single-scale model EEGNet, the classification accuracy of the proposed model increases by 1.89% and 1.93%, the F1 value by 2.05% and 2.08%, and the kappa coefficient by 0.0381 and 0.0385, respectively. Ablation experiments verify the effectiveness of each module designed in this paper.
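The multi-scale idea in the first stage can be pictured independently of the full network: the same signal is filtered with kernels of several temporal lengths so that both fast and slow EEG dynamics survive into the feature stack. The toy sketch below uses fixed moving-average kernels purely for illustration; MGTTCNet's actual module uses learned dynamic convolutions:

```python
import numpy as np

def multi_scale_conv1d(x, kernel_sizes=(7, 15, 31)):
    """Toy multi-scale feature extractor: filter one EEG channel with
    moving-average kernels of several lengths and stack the outputs.
    (Stand-in for learned multi-scale dynamic convolution.)"""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                      # length-k smoothing kernel
        feats.append(np.convolve(x, kernel, mode="same"))
    return np.stack(feats)                           # shape: (n_scales, len(x))
```

Each row of the output is the signal seen at a different temporal resolution; a downstream attention or temporal-convolution stage then mixes these scales.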

Secure K-prototype clustering against the collusion of rational adversaries
TIAN Youliang, ZHAO Min, BI Renwan, XIONG Jinbo
Journal of Xidian University    2024, 51 (2): 196-210.   DOI: 10.19665/j.issn1001-2400.20230305

Aiming at the problems of data privacy leakage in the cloud environment and collusion between cloud servers during clustering, a cooperative secure K-prototype clustering scheme (CSKC) against rational colluding adversaries is proposed. First, considering that homomorphic encryption does not directly support nonlinear computation, secure computation protocols are designed based on homomorphic encryption and additive secret sharing, so that the input data and intermediate results are held as additive secret shares and the security comparison function is computed accurately. Second, according to game-equilibrium theory, several efficient incentive mechanisms are designed, and mutual-condition contracts and report contracts are constructed to constrain the cloud servers to execute the secure computation protocols honestly and without collusion. Finally, the proposed protocols and contracts are analyzed theoretically, and the performance of the CSKC scheme is verified by experiment. Experimental results show that, compared with model accuracy in the plaintext environment, the accuracy loss of the CSKC scheme is within 0.22%.
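The additive-secret-sharing primitive the protocols build on is simple to state: a value is split into random shares that sum to it modulo a fixed modulus, each server holds one share, and linear operations can be done locally on shares without communication. A minimal sketch (the modulus `Q` and two-server setting are illustrative choices, not taken from the paper):

```python
import secrets

Q = 2**61 - 1   # illustrative modulus for additive sharing

def share(x, n=2):
    """Split integer x into n additive shares: shares sum to x mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

def add_shared(a, b):
    """Each server adds its two local shares; no interaction is needed,
    and the result is a valid sharing of the sum."""
    return [(x + y) % Q for x, y in zip(a, b)]
```

Nonlinear steps such as the secure comparison function cannot be done locally like this, which is exactly why the scheme combines sharing with homomorphic encryption and interactive protocols.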

New method for calculating the differential-linear bias of the ARX cipher
ZHANG Feng, LIU Zhengbin, ZHANG Jing, ZHANG Wenzheng
Journal of Xidian University    2024, 51 (2): 211-223.   DOI: 10.19665/j.issn1001-2400.20230404

The ARX cipher is built from three basic operations: modular addition, rotation and XOR. The bias of ARX differential-linear distinguishers is currently computed by statistical analysis. At CRYPTO 2022, NIU et al. gave a method for evaluating the correlation of ARX differential-linear distinguishers without statistical analysis, and presented a 10-round differential-linear distinguisher for SPECK32/64. This paper gives the definition of differential-linear characteristics and presents the first method for calculating the bias of differential-linear distinguishers from such characteristics, building on the methods of BLONDEAU et al. and BAR-ON et al. A method for searching for differential-linear characteristics based on Boolean Satisfiability (SAT) automation techniques is also proposed, giving a new way to calculate the bias of ARX differential-linear distinguishers without statistical analysis. As an application, the bias of the 10-round differential-linear distinguisher for SPECK32/64 given by NIU et al. is calculated, obtaining the theoretical value 2^-15.00, which is very close to the experimental value 2^-14.90 from statistical analysis and better than the theoretical value 2^-16.23 given by NIU et al. The first theoretical value, 2^-8.41, for the bias of the 9-round differential-linear distinguisher for SIMON32/64 is also given, close to the experimental value 2^-7.12 obtained by statistical analysis. Experimental results fully demonstrate the effectiveness of the method.
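The "experimental value from statistical analysis" that the paper's theoretical values are compared against is obtained by sampling: encrypt many random pairs with the chosen input difference, and measure how often the output parity under the linear mask agrees between the two ciphertexts. A minimal sketch over the SPECK32/64 round function (the difference and mask arguments below are illustrative placeholders, not the distinguishers from the paper):

```python
import random

def speck_round(x, y, k, alpha=7, beta=2, n=16):
    """One SPECK32 round on 16-bit halves with round key k."""
    mask = (1 << n) - 1
    x = ((x >> alpha) | (x << (n - alpha))) & mask   # rotate right by alpha
    x = (x + y) & mask                               # modular addition
    x ^= k
    y = ((y << beta) | (y >> (n - beta))) & mask     # rotate left by beta
    y ^= x
    return x, y

def dl_bias(diff, lin_mask, rounds=2, trials=1 << 14, seed=1):
    """Empirical differential-linear bias = Pr[masked parities agree] - 1/2
    over `rounds` rounds with random plaintexts and round keys."""
    rng = random.Random(seed)
    dx, dy = diff
    mx, my = lin_mask
    agree = 0
    for _ in range(trials):
        x, y = rng.getrandbits(16), rng.getrandbits(16)
        keys = [rng.getrandbits(16) for _ in range(rounds)]
        a, b = x, y
        c, d = x ^ dx, y ^ dy
        for k in keys:
            a, b = speck_round(a, b, k)
            c, d = speck_round(c, d, k)
        parity = bin((a ^ c) & mx).count("1") + bin((b ^ d) & my).count("1")
        agree += (parity % 2 == 0)
    return agree / trials - 0.5
```

Such sampling needs on the order of 1/bias^2 trials, which is why purely theoretical bias calculations like the paper's become valuable as distinguishers get longer and biases smaller.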

Improved data sharing scheme based on conditional broadcast proxy re-encryption
ZHAI Sheping, LU Xianjing, HUO Yuanyuan, YANG Rui
Journal of Xidian University    2024, 51 (2): 224-238.   DOI: 10.19665/j.issn1001-2400.20230410

Traditional conditional broadcast proxy re-encryption data sharing approaches over-rely on untrusted third-party proxy servers, which leads to low efficiency and to data security and privacy leaks. To address these problems, this paper proposes an information security protection scheme that combines conditional broadcast proxy re-encryption with a blockchain consensus mechanism. First, to counter the single point of failure and collusion attacks of individual proxy servers, the scheme lets blockchain nodes take turns acting as proxy servers, and selects high-credibility proxy servers to participate in re-encryption through a Delegated Proof of Stake (DPoS) consensus algorithm that integrates a credibility mechanism, greatly reducing the risks of both a single point of failure and collusion attacks. Second, to address the excessive authority of a proxy server that holds a complete re-encryption key, the threshold cryptosystem concept is introduced and the re-encryption key is split into multiple fragments distributed across different proxy servers, so that no single proxy server can decrypt the data independently, effectively improving the security of the re-encryption process. Finally, analysis of the scheme's security, correctness and credibility demonstrates that it effectively resolves the security vulnerabilities of traditional schemes. Simulation results also show that, compared with existing data sharing schemes, this scheme has significant advantages in ensuring data security while incurring lower computational costs.
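The threshold idea behind the key-fragment design can be sketched with standard Shamir secret sharing: a key becomes the constant term of a random polynomial, each proxy server receives one evaluation, and only t of the n fragments together can reconstruct it. This sketch shows the generic (t, n) mechanism, not the paper's specific re-encryption-key construction; the prime modulus is an illustrative choice:

```python
import random

P = 2**127 - 1   # illustrative prime modulus for the share field

def split_key(secret, n, t, seed=None):
    """Shamir (t, n) sharing: random degree-(t-1) polynomial f with
    f(0) = secret; fragment i is (i, f(i))."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def recover_key(shares):
    """Lagrange interpolation at x = 0 from any t fragments."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Fewer than t fragments reveal nothing about the key, which is exactly the property that strips any single proxy server of the ability to re-encrypt or decrypt on its own.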

Superimposed pilots transmission for unsourced random access
HAO Mengnan, LI Ying, SONG Guanghui
Journal of Xidian University    2024, 51 (3): 1-8.   DOI: 10.19665/j.issn1001-2400.20230907

In unsourced random access, the base station (BS) only needs to recover the messages sent by the active devices without identifying the devices, which allows a large number of active devices to access the BS at any time without requesting resources in advance, greatly reducing the signaling overhead and transmission delay; this has attracted the attention of many researchers. Many current works design random access schemes based on preamble sequences. However, these schemes are not robust when the number of active devices changes and cannot make full use of the channel bandwidth, so they perform poorly when the number of active devices is large. To address this problem, a superimposed pilots transmission scheme is proposed to improve channel utilization, and the performance for different numbers of active devices is further improved by optimal power allocation, giving the system good robustness as the number of active devices changes. In this scheme, the first B_p bits of the message sequence are used as an index to select a pilot sequence and interleaver pair. The message sequence is then encoded, modulated and interleaved using the selected interleaver, and the selected pilot sequence is superimposed on the interleaved modulated sequence to obtain the transmitted signal. For this transmission scheme, a power optimization method based on the minimum probability of error is proposed to obtain the optimal power allocation ratio for different numbers of active devices, and a two-stage detection scheme of superimposed pilot detection and cancellation followed by multi-user detection and decoding is designed. Simulation results show that the superimposed pilots scheme improves on preamble-sequence-based unsourced random access schemes by about 1.6~2.0 dB and 0.2~0.5 dB, respectively, can flexibly change the number of active devices that the system carries, and has lower decoding complexity.
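The transmit side and the first detection stage can be sketched concretely: the pilot is added on top of the modulated data rather than prepended, with a power-split factor deciding how much energy the pilot takes, and the receiver identifies the pilot (and hence the index bits and interleaver) by correlation. The sketch below uses BPSK, ±1 pilots, and a power fraction `rho` as illustrative assumptions; the paper's optimal power allocation chooses `rho` as a function of the number of active devices:

```python
import numpy as np

def transmit(data_bits, pilot, rho):
    """Superimpose a pilot on BPSK data with power split rho:
    sqrt(rho) * pilot + sqrt(1 - rho) * data (total power 1 per symbol)."""
    data = 1.0 - 2.0 * data_bits            # BPSK map: 0 -> +1, 1 -> -1
    return np.sqrt(rho) * pilot + np.sqrt(1.0 - rho) * data

def detect_pilot(rx, pilot_bank):
    """Stage 1 of detection: correlate against every candidate pilot
    and return the index of the strongest match."""
    scores = pilot_bank @ rx
    return int(np.argmax(np.abs(scores)))
```

After the pilot index is detected, its scaled contribution is subtracted from the received signal (cancellation), and the residual goes to stage 2, multi-user detection and decoding, using the interleaver that the index identifies.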
