
Top Read Articles

    Research on the multi-objective algorithm of UAV cluster task allocation
    GAO Weifeng, WANG Qiong, LI Hong, XIE Jin, GONG Maoguo
    Journal of Xidian University    2024, 51 (2): 1-12.   DOI: 10.19665/j.issn1001-2400.20230413

    Aiming at the cooperative task allocation problem of a UAV swarm in a target recognition scenario, an optimization model taking recognition cost and recognition benefit as objectives is established, and a decomposition-based multi-objective differential evolution algorithm is designed to solve it. First, an elite initialization method is proposed: the initial solutions are screened to improve the quality of the solution set while ensuring a uniform distribution of the obtained nondominated solutions. Second, a multi-objective differential evolution operator under integer encoding is constructed based on the model characteristics to improve the convergence speed of the algorithm. Finally, a tabu search strategy with restrictions is designed so that the algorithm can escape local optima. The algorithm provides a set of nondominated solutions, so that a more reasonable solution can be selected according to actual needs. After the allocation scheme is obtained by the above method, a task reallocation strategy based on the auction algorithm is designed, which further adjusts the allocation scheme to cope with the unexpected loss of UAVs. Simulation experiments verify the effectiveness of the proposed algorithm in solving small-, medium- and large-scale task allocation problems; moreover, compared with other algorithms, the nondominated set obtained by the proposed algorithm is of higher quality, consuming less recognition cost and obtaining higher recognition benefit, which indicates that the proposed algorithm has certain advantages.
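The tabu search component can be illustrated with a minimal sketch. Everything below is hypothetical, not the paper's model: an integer-encoded allocation assigns each task to a UAV, each step takes the cheapest non-tabu reassignment move, recently reversed moves are held in a tabu list (with an aspiration exception), and the best solution ever seen is retained so the search can leave local optima.

```python
import random

def total_cost(assign, cost, load_penalty=2.0):
    """Recognition cost of an integer-encoded allocation: per-task cost
    plus a quadratic penalty when one UAV carries many tasks (illustrative)."""
    c = sum(cost[t][u] for t, u in enumerate(assign))
    for u in set(assign):
        n = assign.count(u)
        c += load_penalty * n * (n - 1) / 2
    return c

def tabu_search(cost, n_uavs, iters=60, tenure=5, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(n_uavs) for _ in cost]        # random start
    best, best_cost = assign[:], total_cost(assign, cost)
    tabu = {}                                             # move -> expiry iteration
    for it in range(iters):
        cand = []
        for t in range(len(cost)):
            for u in range(n_uavs):
                if u == assign[t]:
                    continue
                trial = assign[:]
                trial[t] = u
                c = total_cost(trial, cost)
                # aspiration: a tabu move is allowed if it beats the best so far
                if tabu.get((t, u), -1) < it or c < best_cost:
                    cand.append((c, t, u))
        c, t, u = min(cand)
        tabu[(t, assign[t])] = it + tenure                # forbid undoing the move
        assign[t] = u
        if c < best_cost:
            best, best_cost = assign[:], c
    return best, best_cost

# toy instance: 4 tasks, 3 UAVs, invented costs
best, bc = tabu_search([[4, 1, 3], [2, 5, 1], [1, 2, 4], [3, 1, 2]], n_uavs=3)
```

The tabu list only blocks recently undone moves, so the search keeps moving after reaching a local optimum instead of cycling back into it.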

    Spectrum compression based autofocus algorithm for the TOPS BP image
    ZHOU Shengwei, LI Ning, XING Mengdao
    Journal of Xidian University    2024, 51 (1): 1-10.   DOI: 10.19665/j.issn1001-2400.20230102

    In high-squint TOPS-mode SAR imaging on a maneuvering platform, the BP imaging algorithm in the ground-plane rectangular coordinate system can obtain a distortion-free wide-swath SAR image of the ground plane in a short time. However, how to quickly complete motion error compensation and sidelobe suppression for the BP image remains difficult in practical applications. This paper proposes an improved spectral compression method, which can quickly realize follow-up operations such as autofocus on the ground-plane BP image in the high-squint TOPS mode of a mobile platform. First, considering that the traditional BP spectral compression method is only applicable to the spotlight imaging mode, an improved exact spectral compression function is derived by combining the virtual rotation center theory of high-squint TOPS SAR with wavenumber spectrum analysis; it yields an unambiguous ground-plane TOPS-mode BP image spectrum through full-aperture compression, on the basis of which phase gradient autofocus (PGA) can be used to quickly complete full-aperture motion error estimation and compensation. In addition, based on the unambiguous aligned BP image spectrum obtained by the proposed spectral compression method, image sidelobe suppression can be realized by uniform windowing in the azimuth frequency domain. Finally, the effectiveness of the proposed algorithm is verified by simulation data processing.

    Superimposed pilots transmission for unsourced random access
    HAO Mengnan, LI Ying, SONG Guanghui
    Journal of Xidian University    2024, 51 (3): 1-8.   DOI: 10.19665/j.issn1001-2400.20230907

    In unsourced random access, the base station (BS) only needs to recover the messages sent by the active devices without identifying them, which allows a large number of active devices to access the BS at any time without requesting resources in advance, thereby greatly reducing signaling overhead and transmission delay; this has attracted the attention of many researchers. Currently, many works are devoted to designing random access schemes based on preamble sequences. However, these schemes are not robust when the number of active devices changes and cannot make full use of the channel bandwidth, resulting in poor performance when the number of active devices is large. Aiming at this problem, a superimposed pilots transmission scheme is proposed to improve channel utilization, and the performance for different numbers of active devices is further improved by optimal power allocation, giving the system good robustness when the number of active devices changes. In this scheme, the first Bp bits of the message sequence are used as an index to select a pair consisting of a pilot sequence and an interleaver. The message sequence is then encoded, modulated and interleaved with the selected interleaver, and the selected pilot sequence is superimposed on the interleaved modulated sequence to obtain the transmitted signal. For this transmission scheme, a power optimization scheme based on the minimum error probability is proposed to obtain the optimal power allocation ratio for different numbers of active devices, and a two-stage detection scheme of superimposed pilot detection and cancellation followed by multi-user detection and decoding is designed. Simulation results show that the superimposed pilots transmission scheme improves on preamble-sequence-based unsourced random access schemes by about 1.6~2.0 dB and 0.2~0.5 dB, respectively, can flexibly change the number of active devices that the system carries, and has a lower decoding complexity.
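The superimposed transmission itself is easy to sketch numerically. The block length, pilot pool size and power split below are illustrative choices, not the paper's parameters: a power fraction rho goes to the index-selected pilot, the rest to the modulated data, and the receiver recovers the index by correlating against the shared pilot pool.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_pilots, rho = 256, 16, 0.3          # block length, pilot pool size, pilot power fraction

pilots = rng.choice([-1.0, 1.0], size=(n_pilots, N))   # shared +-1 pilot pool

def transmit(index_bits, data_symbols):
    """Superimpose the index-selected pilot on the interleaved modulated data."""
    j = int("".join(map(str, index_bits)), 2)           # the first Bp bits pick the pilot
    return np.sqrt(rho) * pilots[j] + np.sqrt(1 - rho) * data_symbols, j

def detect_pilot(y):
    """Receiver side: pick the pilot with the largest correlation magnitude."""
    return int(np.argmax(np.abs(pilots @ y)))

data = rng.choice([-1.0, 1.0], size=N)                  # BPSK payload (already interleaved)
y, true_j = transmit([1, 0, 1, 1], data)
y_noisy = y + 0.2 * rng.standard_normal(N)              # AWGN
detected = detect_pilot(y_noisy)
```

With N = 256 the wanted correlation scales with N while data and noise cross-terms scale only with sqrt(N), which is why the index survives the superposition; the detected pilot would then be cancelled before multi-user decoding.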

    Research on a clustering-assisted intelligent spectrum allocation technique
    ZHAO Haoqin, YANG Zheng, SI Jiangbo, SHI Jia, YAN Shaohu, DUAN Guodong
    Journal of Xidian University    2023, 50 (6): 1-12.   DOI: 10.19665/j.issn1001-2400.20231006

    Aiming at the low spectrum utilization of traditional spectrum allocation schemes in large-scale, highly dynamic electromagnetic spectrum warfare systems, intelligent spectrum allocation technology is investigated. In this paper, we first construct a complex and highly dynamic electromagnetic spectrum combat scenario and, under the coexistence of multiple types of equipment such as radar, communication and jamming, model spectrum allocation in the complex electromagnetic environment as an optimization problem that maximizes the number of access devices. Second, a clustering-assisted intelligent spectrum allocation algorithm is proposed. To address the exploding action-space dimension faced by centralized resource allocation algorithms, a multi-DDQN network is used to characterize the decision-making information of each node. Then, based on the elbow method and the K-means++ algorithm, a multi-node collaborative approach is proposed in which nodes within a cluster make chained decisions by sharing action information and nodes in different clusters make independent decisions, assisting the DDQN algorithm in allocating resources intelligently. By designing the state and action spaces and the reward function, and by adopting a variable learning rate to achieve fast convergence, the nodes can dynamically allocate multidimensional resources such as frequency and energy according to changes in the electromagnetic environment. Simulation results show that, under the same electromagnetic environment with 20 nodes, the number of accessible devices of the proposed algorithm is about 80% higher than that of the greedy algorithm and about 30% higher than that of the genetic algorithm, making it more suitable for multi-device spectrum allocation in dynamic electromagnetic environments.
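The clustering stage can be sketched as follows. The toy 2-D node positions, the deterministic farthest-point seeding (a greedy stand-in for k-means++), and the second-difference elbow rule are illustrative simplifications, not the paper's implementation:

```python
import numpy as np

def seed_centers(X, k):
    """Deterministic farthest-point seeding (a greedy variant of k-means++)."""
    centers = [X[0]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d2))])
    return np.array(centers)

def kmeans(X, k, iters=50):
    centers = seed_centers(X, k)
    for _ in range(iters):                # Lloyd iterations: assign, then re-center
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    inertia = float(((X - centers[labels]) ** 2).sum())
    return labels, centers, inertia

def elbow_k(X, k_max=5):
    """Pick k at the sharpest bend of the inertia curve (largest second difference)."""
    inertias = [kmeans(X, k)[2] for k in range(1, k_max + 1)]
    return int(np.argmax(np.diff(inertias, 2))) + 2   # second-difference index -> k

# toy scenario: three well-separated groups of nodes
rng = np.random.default_rng(0)
X = np.vstack([c + 0.3 * rng.standard_normal((20, 2))
               for c in ([0, 0], [20, 0], [0, 20])])
```

Within each resulting cluster the nodes would then share action information for chained decisions, while clusters decide independently.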

    Work pattern recognition method based on feature fusion
    LIU Gaogao, HUANG Dongjie, XI Xin, LI Hao, CAO Xuyuan
    Journal of Xidian University    2023, 50 (6): 13-20.   DOI: 10.19665/j.issn1001-2400.20230705

    Operational pattern recognition, which determines the function and behavior of a radar through signal processing and analysis, is one of the important means in the field of intelligence reconnaissance and electronic countermeasures. With the diversification of modern airborne radar functions, the corresponding signal styles are becoming more and more complex, and the increasingly complex reconnaissance environment leads to uneven quality of reconnaissance signals, which poses great difficulties for traditional operational pattern recognition methods. To solve this problem, a new work pattern recognition method is proposed on the basis of existing methods, integrating parameter feature recognition and D-S evidence theory recognition. First, for the radiation source characteristic signals processed by each reconnaissance platform, a feature parameter recognition algorithm is used to quickly obtain the working mode information, and the recognition results are verified by D-S evidence theory. Second, for signals that cannot be recognized by a single platform, D-S evidence theory fusion recognition is used to distinguish the working mode. Theoretical analysis shows that the algorithm has the advantages of fast operation and a simple structure, and that the new fusion recognition method can improve the recognition accuracy of the working mode. Finally, the feasibility of the method is verified by simulation.
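The fusion step rests on Dempster's rule of combination. A minimal sketch, where the frame of discernment (search/track modes) and the mass values are invented for illustration:

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination; each argument is a dict mapping
    frozenset hypotheses -> belief mass over the same frame of discernment."""
    fused, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:                         # compatible evidence reinforces
                fused[inter] = fused.get(inter, 0.0) + v1 * v2
            else:                             # incompatible evidence is conflict mass
                conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {h: v / (1.0 - conflict) for h, v in fused.items()}

S, T = frozenset({"search"}), frozenset({"track"})
m_param = {S: 0.7, S | T: 0.3}                # parameter-feature recognizer (hypothetical)
m_other = {S: 0.6, T: 0.1, S | T: 0.3}        # second platform's evidence (hypothetical)
fused = ds_combine(m_param, m_other)
```

Normalizing by 1 - conflict redistributes the conflicting mass, so two weakly confident platforms can still yield a confident fused mode decision.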

    Efficient semantic communication method for bandwidth constrained scenarios
    LIU Wei, WANG Mengyang, BAI Baoming
    Journal of Xidian University    2024, 51 (3): 9-18.   DOI: 10.19665/j.issn1001-2400.20240203

    Semantic communication provides a new research perspective for communication system optimization and performance improvement. However, current research on semantic communication ignores the impact of communication overhead and does not consider the relationship between semantic communication performance and communication overhead, making it difficult to improve semantic communication performance when bandwidth resources are limited. Therefore, an information bottleneck based semantic communication method for text sources is proposed. First, a Transformer model is used for joint semantic-channel encoding and decoding, a feature selection module is designed to identify and delete redundant information, and an end-to-end semantic communication model is constructed. Second, considering the tradeoff between semantic communication performance and communication cost, a loss function is designed based on information bottleneck theory to ensure semantic communication performance while reducing communication cost, and the semantic communication model is trained and optimized with it. Experimental results on the proceedings of the European Parliament show that, compared with the baseline model, the proposed method can reduce communication overhead by 20%~30% while maintaining communication performance, and that under the same bandwidth conditions its BLEU score can be increased by 5%. These results prove that the proposed method can effectively reduce semantic communication overhead, thereby improving semantic communication performance when bandwidth resources are limited.
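An information-bottleneck-style loss trades task distortion against the rate of the retained representation, loss = distortion + beta * I(X;T). A toy numeric sketch (the joint distributions and beta are arbitrary, and a mutual-information term stands in for the model's variational rate estimate):

```python
import numpy as np

def mutual_information(p_joint):
    """I(X;T) in bits for a discrete joint distribution over (X, T)."""
    px = p_joint.sum(axis=1, keepdims=True)
    pt = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log2(p_joint[mask] / (px @ pt)[mask])).sum())

def ib_objective(distortion, p_joint, beta):
    """Information bottleneck tradeoff: task loss plus beta times the rate."""
    return distortion + beta * mutual_information(p_joint)

# T perfectly copies X -> rate is 1 bit; T independent of X -> rate is 0 bits
copy = np.array([[0.5, 0.0], [0.0, 0.5]])
indep = np.array([[0.25, 0.25], [0.25, 0.25]])
```

Raising beta pushes the optimizer toward representations that transmit fewer bits, which is exactly the lever that lets the overhead shrink while the task loss term guards BLEU performance.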

    Time series anomaly detection based on multi-scale feature information fusion
    HENG Hongjun, YU Longwei
    Journal of Xidian University    2024, 51 (3): 203-214.   DOI: 10.19665/j.issn1001-2400.20230906

    Currently, most time series lack anomaly labels, and existing reconstruction-based anomaly detection algorithms fail to effectively capture the complex underlying correlations and temporal dependencies among multidimensional data. To construct feature-rich time series, a multi-scale feature information fusion anomaly detection model is proposed. First, the model employs convolutional neural networks to convolve the sequences within sliding windows, capturing local contextual information at different scales. Then, the positional encoding from the Transformer is used to embed the convolved time series windows, strengthening the positional relationships between each time series and its neighboring sequences within the sliding window. Temporal attention is introduced to capture the temporal autocorrelation of the data, and multi-head self-attention adaptively assigns different weights to different time series within the window. Finally, the reconstructed window data obtained through down-sampling is progressively fused with the local features and temporal context information at different scales. This process accurately reconstructs the original time series, with the reconstruction error used as the final anomaly score for anomaly determination. Experimental results indicate that the model achieves higher F1 scores than the baseline models on both the SWaT and SMD datasets. On the high-dimensional and imbalanced WADI dataset, its F1 score is 1.66% lower than that of the GDN model.
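The final scoring step is generic enough to sketch: the per-point reconstruction error serves as the anomaly score, thresholded here at mean + 3 sigma. The toy sine signal, the moving-average "reconstruction" and the 3-sigma rule are illustrative stand-ins for the paper's learned model:

```python
import numpy as np

def moving_average(x, w=5):
    """Stand-in for the model's reconstruction: a centered moving average."""
    pad = np.pad(x, (w // 2, w // 2), mode="edge")
    return np.convolve(pad, np.ones(w) / w, mode="valid")

def anomaly_flags(x, w=5, k=3.0):
    score = np.abs(x - moving_average(x, w))      # reconstruction error as anomaly score
    thresh = score.mean() + k * score.std()       # simple mean + k*sigma threshold
    return score, score > thresh

t = np.linspace(0, 4 * np.pi, 400)
x = np.sin(t)
x[200] += 5.0                                     # injected point anomaly
score, flags = anomaly_flags(x)
```

A smooth reconstruction cannot follow the injected spike, so the error (and hence the score) peaks exactly where the anomaly sits; the learned model plays the same role for multidimensional windows.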

    Research on lightweight and feature enhancement of SAR image ship targets detection
    GONG Junyang, FU Weihong, FANG Houzhang
    Journal of Xidian University    2024, 51 (2): 96-106.   DOI: 10.19665/j.issn1001-2400.20230407

    The accuracy of ship target detection in synthetic aperture radar (SAR) images is susceptible to nearshore clutter, and existing detection algorithms are highly complex and difficult to deploy on embedded devices. To address these problems, a lightweight, high-precision SAR image ship target detection algorithm, CA-Shuffle-YOLO (Coordinate Shuffle You Only Look Once), is proposed in this article. Based on the YOLO v5 target detection algorithm, the backbone network is improved in two respects: lightweight design and feature refinement. A lightweight module is introduced to reduce the computational complexity of the network and improve the inference speed, and a collaborative attention mechanism module is introduced to enhance the algorithm's ability to extract detailed information on nearshore ship targets. In the feature fusion network, weighted feature fusion and cross-module fusion are used to enhance the model's ability to fuse detailed information on SAR ship targets, while depthwise separable convolution is used to reduce the computational complexity and improve real-time performance. Tests and comparison experiments on the SSDD ship target detection dataset show that CA-Shuffle-YOLO achieves a detection accuracy of 97.4% at a detection frame rate of 206 FPS while requiring only 6.1 GFLOPs; compared with the original YOLO v5, its frame rate is 60 FPS higher while requiring only about 12% of the computation of the ordinary YOLO v5.

    Research on the interference combinational sequence generation algorithm for the intelligent countermeasure UAV
    MA Xiaomeng, GAO Meiguo, YU Mohan, LI Yunjie
    Journal of Xidian University    2023, 50 (6): 44-61.   DOI: 10.19665/j.issn1001-2400.20230903

    With the maturation of autonomous navigation and flight technology for the unmanned aerial vehicle (UAV), unauthorized UAV flights in controlled airspace have appeared, bringing great hidden dangers to personal safety and causing a certain degree of economic loss. This paper studies how to improve the effectiveness of adaptive measurement-and-control and navigation jamming when the UAV's flight control is unknown, on the basis of identifying the UAV flight status and evaluating countermeasure effectiveness in real time, finally realizing an intelligent countermeasure game against a non-intelligent UAV through the combination of remote-communication jamming and navigation/positioning jamming. A game model of confrontation between the anti-UAV system (AUS) and the UAV is developed from the basic units of radar detection, GPS navigation and positioning, UAV remote-communication suppression jamming, and GPS navigation suppression and spoofing. The mathematical model is constructed using deep reinforcement learning and the Markov decision process. Meanwhile, the concept of a situation assessment ring for classifying the UAV flight status is proposed to provide basic information for the network to sense jamming effectiveness. The proximal policy optimization algorithm, the maximum entropy optimization algorithm and the actor-critic algorithm are each used to repeatedly train the constructed intelligent AUS, and the resulting network parameters generate intelligent jamming combination sequences according to the UAV flight state and countermeasure effectiveness. The intelligent jamming combination sequences generated by the various deep reinforcement learning algorithms all achieve the initial goal of deceiving the UAV, which verifies the effectiveness of the anti-UAV system model. Comparison experiments show that the proposed situation assessment ring is sufficient and effective for AUS sensing of jamming effectiveness.

    Time-varying channel prediction algorithm based on the attention denoising and complex LSTM network
    CHENG Yong, JIANG Fengyuan
    Journal of Xidian University    2024, 51 (1): 29-40.   DOI: 10.19665/j.issn1001-2400.20230203

    With the development of wireless communication technology, research on communication in high-speed scenarios is becoming more and more extensive, and obtaining accurate channel state information is of great significance for improving the performance of a wireless communication system. To address the facts that existing channel prediction algorithms for orthogonal frequency division multiplexing (OFDM) systems do not consider the influence of noise and that their prediction accuracy is low in high-speed scenarios, a time-varying channel prediction algorithm based on attention denoising and complex convolutional LSTM is proposed. First, a channel-attention denoising network is proposed to denoise the channel state information, reducing the influence of noise. Second, a channel prediction model based on complex convolutional layers and long short-term memory (LSTM) is constructed: the denoised channel state information at historical moments is extracted and input into the model to predict the channel state information at future moments. The improved LSTM prediction model enhances the ability to extract channel timing features and improves the accuracy of channel prediction. Finally, the Adam optimizer is used in training the model that predicts the channel state information at future moments. Simulation results show that the proposed time-varying channel prediction algorithm based on the attention denoising and complex convolutional LSTM network achieves a higher prediction accuracy for the channel state information than the comparison algorithms, and that it can be applied to time-varying channel prediction in high-speed moving scenarios.

    Fast algorithm for intelligent optimization of the cross ambiguity function of passive radar
    CHE Jibin, WANG Changlong, JIA Yan, REN Zizheng, LIU Chunheng, ZHOU Feng
    Journal of Xidian University    2023, 50 (6): 21-33.   DOI: 10.19665/j.issn1001-2400.20231003

    A passive radar system realizes target detection by receiving the direct-path signal from an emitter together with the target echo signal, and the cross ambiguity function is an important means of improving the coherent accumulation of the echo signal. However, the echo signal received by a passive radar is very weak, so the accumulation time must be increased to improve the estimation accuracy, and when the target speed is high the frequency search range also grows. To meet a range of target detection requirements while maintaining real-time data processing, it is therefore of great significance to study fast calculation methods for the cross ambiguity function: owing to the objective requirements of long-time accumulation and large-scale time-frequency search, computing the cross ambiguity function is very expensive, which makes it difficult for traditional accelerated calculation methods based on exhaustive search to meet the real-time requirements of system processing. To improve the efficiency of cross ambiguity function optimization, a time-frequency difference calculation method based on multi-group feature optimization is proposed in this paper. By deeply analyzing the characteristics of typical digital TV signals, a two-stage fast calculation method for intelligent optimization of the cross ambiguity function, based on target characteristics, is designed within the framework of particle swarm optimization theory. By designing an effective search strategy, the method introduces a multi-population iteration mechanism and a shrinkage factor, which avoids the redundant computation of traditional methods. On the premise of ensuring calculation accuracy, the number of time-frequency points evaluated is greatly reduced, and the search efficiency of the cross ambiguity function is improved.
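The particle-swarm search over the delay-Doppler plane can be sketched generically. The quadratic surrogate objective, swarm size and coefficients below are illustrative only; the paper's method additionally uses multiple populations and a shrinkage factor:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO; returns the best position and value found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # inertia + attraction
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(min(pbest_val))

# hypothetical surrogate for a correlation-peak search over (delay, Doppler)
peak = np.array([1.2, -0.4])
ambiguity = lambda p: ((p - peak) ** 2).sum()          # minimum at the true peak
best, best_val = pso_minimize(ambiguity, [(-5, 5), (-5, 5)])
```

Each particle evaluation replaces one grid point of an exhaustive search, which is where the saving in time-frequency point calculations comes from.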

    Resource optimization algorithm for unmanned aerial vehicle jammer assisted cognitive covert communications
    LIAO Xiaomin, HAN Shuangli, ZHU Xuan, LIN Chushan, WANG Haipeng
    Journal of Xidian University    2023, 50 (6): 75-83.   DOI: 10.19665/j.issn1001-2400.20230603

    Aiming at the covert communication scenario of a cognitive radio network assisted by an unmanned aerial vehicle (UAV) jammer, a resource optimization algorithm based on a transferred generative adversarial network is proposed for the joint optimization of the UAV's trajectory and transmit power. First, based on the actual covert communication scenario, a UAV-jammer-assisted cognitive covert communication model is constructed. Then, a resource allocation algorithm based on a transferred generative adversarial network is designed, which introduces transfer learning and a generative adversarial network. The algorithm consists of a source-domain generator, a target-domain generator and a discriminator; it extracts by transfer learning the main resource allocation features of legitimate users when no covert message is transmitted, transforms the whole covert communication process into an interactive game between the legitimate users and the eavesdropper, alternately trains the target-domain generator and the discriminator in a competitive manner, and reaches the Nash equilibrium to obtain the resource optimization solution for covert communications. Numerical results show that, under the assumptions that the channel distribution information is known and the eavesdropper's detection threshold is unknown, the proposed algorithm attains a near-optimal resource optimization solution for covert communication and converges rapidly.

    Double windows sliding decoding of spatially-coupled quantum LDPC codes
    WANG Yunjiang, ZHU Gaohui, YANG Yuting, MA Zhong, WEI Lu
    Journal of Xidian University    2024, 51 (1): 11-20.   DOI: 10.19665/j.issn1001-2400.20230301

    Quantum error-correcting codes are the key way to address the noise that inevitably accompanies quantum computing. Spatially coupled quantum LDPC codes (SC-QLDPCs), like their classical counterparts, can in principle achieve a good balance between error-correcting capability and decoding delay. Considering the high complexity and long decoding delay of the standard belief propagation algorithm (BPA) when decoding SC-QLDPCs, a quantum version of sliding window decoding, named the double-window sliding decoding algorithm, is proposed in this paper. The algorithm is inspired by classical sliding window decoding strategies and exploits the banded structure, with non-zero bands on the principal diagonal and sub-diagonals, of the two classical parity-check matrices (PCMs) associated with the SC-QLDPC. The phase-flip and bit-flip error syndromes of the received codeword are obtained by sliding two windows simultaneously along the diagonals of the two classical PCMs, which enables a good trade-off between complexity and decoding delay. Simulation results show that the proposed algorithm not only offers low-latency decoding but also approaches the performance of the standard BPA as the window size is enlarged, thus significantly broadening the application scenarios of SC-QLDPCs.

    UAV swarm power allocation strategy for resilient topology construction
    HU Jialin, REN Zhiyuan, LIU Anni, CHENG Wenchi, LIANG Xiaodong, LI Shaobo
    Journal of Xidian University    2024, 51 (2): 28-45.   DOI: 10.19665/j.issn1001-2400.20230314

    A topology construction method with strong toughness is proposed for unmanned combat networks, addressing the performance degradation and network paralysis caused by failures of the network itself or by enemy attack. The method first takes edge-connectivity as the toughness indicator of the network. Second, based on the max-flow min-cut theorem, the minimum cut is used as the measure of this indicator; on this basis, considering the limited power of a single UAV and of the system, the topology is constructed by means of power allocation to improve network toughness from the physical-layer perspective, and a power allocation strategy for the unmanned combat network under power constraints is proposed. Finally, the particle swarm optimization (PSO) algorithm is used to solve the topology toughness optimization problem under the power constraint. Simulation results show that, under the same modulation and power constraints, the PSO-based power allocation scheme effectively improves the toughness of the unmanned combat network compared with other power allocation algorithms under both link-failure and node-failure modes, and that the average successful service arrival rate of the constructed network remains above 95% when about 66.7% of links fail, which meets actual combat requirements.
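Edge-connectivity as a toughness measure follows directly from max-flow min-cut: the global edge-connectivity equals the minimum, over all target nodes t, of the s-t max-flow when every undirected link is modeled as a pair of unit-capacity arcs. A compact sketch (Edmonds-Karp max-flow on small hypothetical topologies):

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Edmonds-Karp max-flow; each undirected link becomes two unit-capacity arcs."""
    cap = [[0] * n for _ in range(n)]
    for u, v in edges:
        cap[u][v] += 1
        cap[v][u] += 1
    flow = 0
    while True:
        parent = [-1] * n                  # BFS for a shortest augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                # no augmenting path left: flow = min cut
            return flow
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:                  # augment along the path
            cap[u][v] -= b
            cap[v][u] += b
        flow += b

def edge_connectivity(n, edges):
    """Global edge-connectivity = min over t of the 0-t max-flow (max-flow min-cut)."""
    return min(max_flow(n, edges, 0, t) for t in range(1, n))

cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-node ring topology
```

A ring survives any single link failure (connectivity 2), while a fully meshed 4-node network tolerates two (connectivity 3); power allocation in the paper shapes which links exist and hence this number.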

    Generative adversarial model for radar intra-pulse signal denoising and recognition
    DU Mingyang, DU Meng, PAN Jifei, BI Daping
    Journal of Xidian University    2023, 50 (6): 133-147.   DOI: 10.19665/j.issn1001-2400.20230312

    While deep neural networks have achieved impressive success in computer vision, related research remains embryonic in radio frequency signal processing, a vital task in modern wireless systems such as the electronic reconnaissance system. Noise corruption is a harmful but unavoidable factor causing severe performance degradation in signal processing, and it has persistently been an intractable problem in the radio frequency domain: for example, a classifier trained on high signal-to-noise ratio (SNR) data may degrade severely when dealing with low-SNR data. To address this problem, we leverage the powerful data representation capacity of deep learning and propose a Generative Adversarial Denoising and classification Network (GADNet) for radar signal restoration and classification. The proposed GADNet consists of a generator, a discriminator and a classifier in an end-to-end workflow. The encoder-decoder generator is trained to extract high-level features and recover signals, while fooling the discriminator by making its denoised outputs hard to distinguish from clean data. The classification loss from the classifier is incorporated jointly into the training procedure. Extensive experiments demonstrate the benefit of the proposed technique in terms of high-quality restoration and accurate classification of radar signals corrupted by intense noise. Moreover, it exhibits superior transferability in low-SNR environments compared with state-of-the-art methods.

    Electromagnetic calculation of radio wave propagation in electrically large mountainous terrain environment
    WANG Nan, LIU Junzhi, CHEN Guiqi, ZHAO Yanan, ZHANG Yu
    Journal of Xidian University    2024, 51 (1): 21-28.   DOI: 10.19665/j.issn1001-2400.20230210

    Emerging applications such as unmanned aerial vehicles place high demands on signal coverage: wireless coverage is needed not only in cities but also in inaccessible mountains, deserts and forests to truly achieve remote control, and in these areas the impact of terrain variation on electromagnetic propagation must be considered. The Uniform Geometrical Theory of Diffraction in computational electromagnetics is an effective method for analyzing electromagnetic problems in electrically large environments, and this paper uses computational electromagnetics to study the propagation of electromagnetic waves in mountainous environments. A new method of constructing an irregular terrain model is presented: available terrain data are fitted by a cubic-surface algorithm, the irregular terrain is spliced together from multiple cubic surfaces, and the accuracy of the model data is verified by the root mean square error. Based on the topographic data, a parallel 3D geometrical optics algorithm is implemented, and the distribution of the regional electromagnetic field is simulated. A real mountainous terrain environment is selected for field measurement; the measured and simulated results show consistent trends, which verifies the effectiveness of the method for analyzing electromagnetic wave propagation over irregular terrain. Considering the scale of environmental electromagnetic computation, a parallel strategy is established, and the parallel efficiency in a 100-core test remains above 80%.

    High precision time synchronization between nodes under motion scenario of UAV platforms
    CHEN Cong, DUAN Baiyu, XU Qiang, PAN Wensheng, MA Wanzhi, SHAO Shihai
    Journal of Xidian University    2024, 51 (3): 19-29.   DOI: 10.19665/j.issn1001-2400.20231207
    Abstract126)   HTML12)    PDF(pc) (1363KB)(72)       Save

    Time synchronization is the foundation for transmission resource scheduling, cooperative localization and data fusion in UAV clusters. Two-way time synchronization is commonly used between nodes in scenarios with high synchronization accuracy requirements. However, the relative motion of the UAVs makes the propagation delays of the two synchronization messages unequal, thereby causing time synchronization errors. To solve this problem, the causes of the synchronization deviation are analyzed from the perspective of solving linear equations. A method is proposed that increases the number of equations by conducting two-way time synchronization twice, while the number of unknowns is reduced under the premise of uniform node motion. The solution formula for the clock deviation under uniform motion is derived, and the derivation shows that the clock deviation solution is independent of the node speed. Synchronization performance is compared with that of existing compensation methods under the additive white Gaussian noise channel, and the effects of timestamp deviation and speed variation on the accuracy of the clock deviation solution are analyzed. Finally, the effectiveness of the dual-trigger two-way time synchronization is verified through field experiments. Simulation and experimental results show that, compared with conventional two-way time synchronization, the dual-trigger scheme does not suffer the systematic deviation caused by uniform node motion.
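    The core idea — turning two exchanges into an overdetermined linear system in the clock offset and a linearly varying propagation delay — can be illustrated with a small simulation. This is an independent sketch, not the authors' code: the timestamp model, the 0.1 s reply delay, the speed and range values, and the least-squares solver are all illustrative assumptions.

```python
C = 3e8            # propagation speed (m/s)
THETA = 5e-4       # true clock offset: B's clock = true time + THETA (s)
D0, V = 2000.0, 300.0   # assumed initial range (m) and radial speed (m/s)

def exchange(a):
    """One two-way exchange started at true time a.
    Returns (t1, t2, t3, t4): A-send, B-recv, B-send, A-recv stamps."""
    tau_f = (D0 + V * a) / C           # forward delay grows with range
    t1, t2 = a, a + tau_f + THETA
    b = a + tau_f + 0.1                # B replies after 0.1 s (assumed)
    tau_b = (D0 + V * b) / C
    t3, t4 = b + THETA, b + tau_b
    return t1, t2, t3, t4

def lstsq3(A, y):
    """Least squares for a 3-unknown system via normal equations."""
    n = 3
    M = [[sum(r[i] * r[j] for r in A) for j in range(n)]
         + [sum(r[i] * yi for r, yi in zip(A, y))] for i in range(n)]
    for c in range(n):                 # Gauss-Jordan with partial pivoting
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(n)]

rows, rhs = [], []
for start in (0.0, 0.5):               # two exchanges ("dual trigger")
    t1, t2, t3, t4 = exchange(start)
    # Model with unknowns (theta, tau0, k), delay tau(t) = tau0 + k*t:
    #   t2-t1 =  theta + tau0 + k*t1   (forward message)
    #   t4-t3 = -theta + tau0 + k*t3   (backward message)
    rows += [[1.0, 1.0, t1], [-1.0, 1.0, t3]]
    rhs += [t2 - t1, t4 - t3]

theta_dual, tau0, k = lstsq3(rows, rhs)

# Conventional single-round estimate, biased by the delay change:
t1, t2, t3, t4 = exchange(0.0)
theta_single = ((t2 - t1) - (t4 - t3)) / 2
```

    With these numbers the single-round estimate carries a bias of roughly half the delay change between the two messages (tens of nanoseconds), while the dual-round least-squares solution removes the speed-dependent term, mirroring the paper's claim that the clock deviation solution becomes independent of node speed.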

    Research on aviation ad hoc network routing protocols in highly dynamic and complex scenarios
    JIANG Laiwei, CHEN Zheng, YANG Hongyu
    Journal of Xidian University    2024, 51 (1): 72-85.   DOI: 10.19665/j.issn1001-2400.20230313
    Abstract125)   HTML8)    PDF(pc) (1772KB)(66)       Save

    With the rapid growth of air transportation, aviation ad hoc network (AANET) communication built on civil aviation aircraft has become capable of providing communication network coverage. Finding effective means of transmitting the important data of aircraft nodes in highly dynamic and uncertain complex scenarios, and backing them up safely, has become important for improving the reliability and manageability of the air-space-ground integrated network. However, characteristics of the AANET such as highly dynamic topology changes, large network span and unstable links pose severe challenges to the design of AANET protocols, especially routing protocols. To facilitate future research on AANET routing protocol design, this paper comprehensively analyzes the relevant requirements and surveys the existing routing protocols. First, according to the characteristics of the AANET, the factors, challenges and design principles that need to be considered in routing protocol design are analyzed. Then, the existing AANET routing protocols are classified and analyzed according to their design characteristics. Finally, future research directions for AANET routing protocols are discussed, so as to provide a reference for research on the next generation of the air-space-ground integrated network in China.

    Medical data privacy protection scheme supporting controlled sharing
    GUO Qing, TIAN Youliang
    Journal of Xidian University    2024, 51 (1): 165-176.   DOI: 10.19665/j.issn1001-2400.20230104
    Abstract121)   HTML8)    PDF(pc) (1588KB)(63)       Save

    The rational use of patients' medical and health data has promoted the development of medical research institutions. Aiming at the current difficulty of sharing medical data between patients and medical research institutions, the ease with which data privacy is leaked, and the uncontrollability of medical data use, a medical data privacy protection scheme supporting controlled sharing is proposed. First, the blockchain and a proxy server are combined to design a controlled-sharing model in which blockchain miner nodes construct proxy re-encryption keys in a distributed manner, the proxy server stores and converts medical data ciphertexts, and proxy re-encryption achieves secure sharing of medical data while protecting patient privacy. Second, a dynamic user-permission adjustment mechanism is designed in which the patient and the blockchain authorization management nodes update access permissions through an authorization list, realizing patient-controlled sharing of medical data. Finally, security analysis shows that the proposed scheme achieves dynamic sharing of medical data while protecting its privacy and can resist collusion attacks. Performance analysis shows that the scheme has advantages in communication and computation overhead and is suitable for controlled data sharing between patients or hospitals and research institutions.
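    Proxy re-encryption, the primitive at the heart of this design, can be illustrated with a textbook BBS98-style scheme over a toy group. This is a generic sketch for intuition only — the tiny parameters are insecure, there is no padding or hashing, the re-encryption key here is derived from both secrets (a simplification), and none of this reproduces the paper's distributed key construction.

```python
import random

# Toy group: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 2039, 1019, 4
random.seed(7)

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)           # (secret, public) key pair

def encrypt(pk, m):
    k = random.randrange(1, q)
    return (m * pow(g, k, p)) % p, pow(pk, k, p)   # (m*g^k, pk^k)

def rekey(sk_a, sk_b):
    # Re-encryption key a -> b (toy: computed from both secrets).
    return (sk_b * pow(sk_a, -1, q)) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)          # (g^a)^k  ->  (g^b)^k

def decrypt(sk, ct):
    c1, c2 = ct
    shared = pow(c2, pow(sk, -1, q), p)       # recover g^k
    return (c1 * pow(shared, p - 2, p)) % p   # m = c1 / g^k

sk_a, pk_a = keygen()          # patient
sk_b, pk_b = keygen()          # research institution
msg = pow(g, 123, p)           # message encoded as a subgroup element
ct_a = encrypt(pk_a, msg)
ct_b = reencrypt(rekey(sk_a, sk_b), ct_a)   # proxy converts without seeing msg
```

    The proxy only ever exponentiates ciphertext components, which is why it can store and convert data without learning the plaintext — the property the scheme relies on for its semi-trusted proxy server.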

    Multi-scale convolutional attention network for radar behavior recognition
    XIONG Jingwei, PAN Jifei, BI Daping, DU Mingyang
    Journal of Xidian University    2023, 50 (6): 62-74.   DOI: 10.19665/j.issn1001-2400.20231005
    Abstract118)   HTML11)    PDF(pc) (8738KB)(77)       Save

    Aiming at the problems of difficult feature extraction and low recognition stability for radar signals under a low signal-to-noise ratio, a radar behavior mode recognition framework based on depth-wise convolution, multi-scale convolution and the self-attention mechanism is proposed, which improves recognition in complex environments without increasing the training difficulty. The algorithm employs depth-wise convolution to segregate weakly correlated channels in the shallow network, then uses multi-scale convolution in place of conventional convolution for multi-dimensional feature extraction, and finally applies a self-attention mechanism to adjust and optimize the weights of different feature maps, suppressing the influence of weakly and negatively correlated channels and spatial regions. Comparative experiments demonstrate that the proposed MSCANet achieves an average recognition rate of 92.25% under conditions of 0~50% missing and false pulses. Compared with baseline networks such as AlexNet, ConvNet, ResNet and VGGNet, the accuracy is improved by 5% to 20%. The model recognizes various radar patterns stably and exhibits enhanced generalization and robustness. Ablation experiments confirm the effectiveness of depth-wise grouped convolution, multi-scale convolution and the self-attention mechanism for radar behavior recognition.

    Damage effect and protection design of the p-GaN HEMT induced by the high power electromagnetic pulse
    WANG Lei, CHAI Changchun, ZHAO Tianlong, LI Fuxing, QIN Yingshuo, YANG Yintang
    Journal of Xidian University    2023, 50 (6): 34-43.   DOI: 10.19665/j.issn1001-2400.20230502
    Abstract118)   HTML11)    PDF(pc) (4062KB)(72)       Save

    Severe electromagnetic environments pose a serious threat to electronic systems. The excellent performance of gallium-nitride-based high electron mobility transistors makes them well suited to high-power, high-frequency applications. With continuous improvements in epitaxial material quality and device manufacturing technology, GaN devices are rapidly developing toward high power and miniaturization, which challenges device reliability and stability. In this paper, the damage effects of high power electromagnetic pulses (EMP) on the enhancement-mode GaN high-electron-mobility transistor (HEMT) are investigated in detail. The mechanism is presented by analyzing the variation of the distributions of multiple internal physical quantities in the device. It is revealed that device damage is dominated by thermal accumulation arising from effects such as self-heating, avalanche breakdown and hot carrier emission during the high power EMP. Furthermore, a multi-scale protection design of the GaN HEMT against high power electromagnetic interference (EMI) is presented and verified by simulation. Device structure optimization demonstrates that a proper passivation layer, which enhances the breakdown characteristics, can improve the anti-EMI capability. Circuit optimization reveals the influence of external components on the damage process: resistive components in series at the source and gate strengthen the device's capability to withstand high power EMP damage. These conclusions are important for the reliability design of GaN devices, especially those operating in severe electromagnetic environments.

    Workflow deployment method based on graph segmentation with communication and computation jointly optimized
    MA Yinghong, LIN Liwan, JIAO Yi, LI Qinyao
    Journal of Xidian University    2024, 51 (2): 13-27.   DOI: 10.19665/j.issn1001-2400.20231206
    Abstract118)   HTML15)    PDF(pc) (3074KB)(95)       Save

    To improve computing efficiency, cloud data centers commonly cope with ever-growing computing and networking tasks by decomposing complex large-scale tasks into simple ones, modeling them as workflows, and completing them on parallel distributed computing clusters. However, the bandwidth consumed by inter-task transmission can easily cause network congestion in the data center, so deploying workflows scientifically, with both computing efficiency and communication overhead taken into account, is of great significance. There are two typical types of workflow deployment algorithms: list-based and cluster-based. The former focuses on improving computing efficiency but pays no attention to inter-task communication cost, so deploying large-scale workflows easily imposes a heavy network load. The latter focuses on minimizing communication cost but sacrifices the parallel computing efficiency of tasks in the workflow, resulting in a long workflow completion time. This work fully explores the dependency and parallelism between tasks in a workflow from the perspective of graph theory. By improving a classic graph segmentation method, the community discovery algorithm, a balance between minimizing communication cost and maximizing computational parallelism is achieved in workflow task partitioning. Simulation results show that, at various workflow scales, the proposed algorithm reduces the communication cost by 35%~50% compared with a typical list-based deployment algorithm, and the workflow completion time by 50%~65% compared with a typical cluster-based deployment algorithm. Its performance is also stable for workflows with different communication-computation ratios.
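    The trade-off the paper targets can be made concrete with a toy evaluator that scores any task-to-cluster partition by its makespan and its cut (communication) cost. The fan-out DAG, unit communication costs and list-scheduling rules below are illustrative assumptions, not the paper's model.

```python
# Toy workflow DAG: task -> (compute_time, {successor: comm_cost}).
dag = {
    "A": (4, {"B": 1, "C": 1, "D": 1, "E": 1}),
    "B": (4, {"F": 1}), "C": (4, {"F": 1}),
    "D": (4, {"F": 1}), "E": (4, {"F": 1}),
    "F": (4, {}),
}
order = ["A", "B", "C", "D", "E", "F"]          # a topological order

def preds(task):
    return [(p, s[task]) for p, (_, s) in dag.items() if task in s]

def evaluate(cluster_of):
    """Return (makespan, cut_cost) under simple list scheduling:
    cross-cluster edges pay their comm cost, and tasks sharing a
    cluster run sequentially on one machine."""
    free, finish = {}, {}
    for t in order:
        est = max([finish[p] + (0 if cluster_of[p] == cluster_of[t] else c)
                   for p, c in preds(t)], default=0)
        start = max(est, free.get(cluster_of[t], 0))
        finish[t] = start + dag[t][0]
        free[cluster_of[t]] = finish[t]
    cut = sum(c for p, (_, s) in dag.items() for t, c in s.items()
              if cluster_of[p] != cluster_of[t])
    return max(finish.values()), cut

singleton = {t: t for t in dag}        # list-style: maximal parallelism
one = {t: "m" for t in dag}            # cluster-style: zero communication
two = {"A": 0, "B": 0, "C": 0, "F": 0, "D": 1, "E": 1}   # a middle ground
results = {name: evaluate(c) for name, c in
           [("singleton", singleton), ("one", one), ("two", two)]}
```

    On this DAG the fully parallel partition gives (makespan 14, cut 8), full co-location gives (24, 0), and the mixed partition gives (18, 4) — exactly the communication-versus-parallelism spectrum that the paper's improved community-discovery partitioning navigates.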

    Highly dynamic multi-channel TDMA scheduling algorithm for the UAV ad hoc network in post-disaster
    SUN Yanjing, LI Lin, WANG Bowen, LI Song
    Journal of Xidian University    2024, 51 (2): 56-67.   DOI: 10.19665/j.issn1001-2400.20230414
    Abstract116)   HTML6)    PDF(pc) (1608KB)(80)       Save

    Extreme emergencies, mainly natural disasters and accidents, pose serious challenges to the rapid re-establishment of emergency communication networks and the real-time transmission of disaster information, making it urgent to build an emergency communication network with rapid response capability and on-demand dynamic adjustment. To achieve real-time transmission of disaster information under the extreme "three interruptions" of power failure, circuit interruption and network disconnection, a flying ad hoc network (FANET) formed by many unmanned aerial vehicles can provide network communication coverage of the disaster-stricken area. Aiming at the channel collision problem caused by unreasonable scheduling of FANET communication resources under the constraints of the complex post-disaster environment, this paper proposes a multi-channel time division multiple access (TDMA) scheduling algorithm based on adaptive Q-learning. According to the link interference relationships between UAVs, a vertex interference graph is established and, combined with graph coloring theory, the multi-channel TDMA scheduling problem is abstracted as a dynamic double coloring problem in highly dynamic scenarios. Considering the high-speed mobility of UAVs, the learning factor of Q-learning is adaptively adjusted according to changes in network topology, achieving a trade-off between the convergence speed of the algorithm and its ability to explore the optimal solution. Simulation experiments show that the proposed algorithm balances network communication conflict against convergence speed and can solve the problems of resource allocation decisions and adaptation to fast-changing topology in highly dynamic post-disaster scenarios.

    Time series prediction method based on the bidirectional long short-term memory network
    GUAN Yepeng, SU Guangyao, SHENG Yi
    Journal of Xidian University    2024, 51 (3): 103-112.   DOI: 10.19665/j.issn1001-2400.20231205
    Abstract112)   HTML12)    PDF(pc) (2614KB)(48)       Save

    Time series prediction uses historical time series to predict a period of time in the future, so that corresponding strategies can be formulated in advance. At present, the categories of time series are complex and diverse, but existing prediction models cannot achieve stable results when faced with multiple types of time series data, and it is difficult to simultaneously meet the practical requirements of complex time series prediction. To address this problem, a time series prediction method based on the Bidirectional Long Short-term Memory (BLSTM) network with an attention mechanism is proposed. Improved forward and backward propagation mechanisms are used to extract temporal information, and future temporal information is inferred through an adaptive weight allocation strategy. Specifically, an improved BLSTM is proposed that extracts deep time series features and explores the temporal dependencies of context by combining BLSTM and Long Short-term Memory (LSTM) networks; on this basis, the proposed temporal attention mechanism is fused to achieve adaptive weighting of deep time series features, which improves their saliency expression ability. Experimental results demonstrate that the proposed method has superior prediction performance in comparison with representative methods on multiple time series datasets of different categories.

    Self-supervised contrastive representation learning for semantic segmentation
    LIU Bochong, CAI Huaiyu, WANG Yi, CHEN Xiaodong
    Journal of Xidian University    2024, 51 (1): 125-134.   DOI: 10.19665/j.issn1001-2400.20230304
    Abstract110)   HTML3)    PDF(pc) (2895KB)(70)       Save

    To improve the accuracy of semantic segmentation models while avoiding the labor and time costs of pixel-wise annotation of large-scale semantic segmentation datasets, this paper studies pre-training methods based on self-supervised contrastive representation learning and designs the Global-Local Cross Contrastive Learning (GLCCL) method around the characteristics of the semantic segmentation task. The method feeds global images and a series of locally cropped image patches into the network to extract global and local visual representations respectively, and guides training with a loss function comprising global contrast, local contrast and global-local cross contrast terms, enabling the network to learn both global and local visual representations as well as cross-regional semantic correlations. When BiSeNet is pre-trained with this method and transferred to the semantic segmentation task, it outperforms existing self-supervised contrastive representation learning and supervised pre-training methods by 0.24% and 0.9% mean intersection over union (MIoU), respectively. Experimental results show that the method improves segmentation by pre-training the model with unlabeled data, which has practical value.

    Adaptive density peak clustering algorithm
    ZHANG Qiang, ZHOU Shuisheng, ZHANG Ying
    Journal of Xidian University    2024, 51 (2): 170-181.   DOI: 10.19665/j.issn1001-2400.20230604
    Abstract109)   HTML5)    PDF(pc) (3821KB)(55)       Save

    Density Peak Clustering (DPC) is widely used in many fields because of its simplicity and efficiency. However, it has two shortcomings: ① for datasets with uneven cluster density and class imbalance, it is difficult to identify the real cluster centers in the decision graph provided by DPC; ② there is a "chain effect" in which misallocating the highest-density point of a region causes all points within the region to be assigned to the same wrong cluster. In view of these two deficiencies, the concept of the natural neighbor (NaN) is introduced, and a density peak clustering algorithm based on natural neighbors (DPC-NaN) is proposed. It uses the natural-neighborhood density to identify noise points, selects the initial pre-clustering center points, and allocates the non-noise points by the density peak method to obtain a pre-clustering; by determining the boundary points and merging radius of the pre-clustering, its results are then adaptively merged into the final clustering. The proposed algorithm eliminates manual parameter presetting and alleviates the "chain effect". Experimental results show that, compared with related clustering algorithms, the proposed algorithm obtains better results on typical datasets and performs well in image segmentation.
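    For intuition, the classic DPC baseline that DPC-NaN builds on can be sketched in a few lines: local density rho, distance-to-higher-density delta, centers picked by the rho·delta product, and assignment by following each point's nearest higher-density neighbor. The toy dataset, cutoff radius and fixed two-center choice are illustrative assumptions; the natural-neighbor refinement itself is not reproduced here.

```python
import math

# Toy 2-D dataset: two well-separated blobs (illustrative only).
pts = [(0, 0), (0.5, 0.4), (0.3, 1.0), (1.0, 0.2), (0.8, 0.9), (0.2, 0.6),
       (5, 5), (5.4, 5.3), (5.1, 5.9), (5.8, 5.2), (5.5, 5.7), (5.9, 5.8)]
dc = 1.5                                   # cutoff radius

d = [[math.dist(p, q) for q in pts] for p in pts]
n = len(pts)
rho = [sum(1 for j in range(n) if j != i and d[i][j] < dc) for i in range(n)]

# delta_i: distance to the nearest point of higher density;
# nn_i: that neighbor (the assignment target).
order = sorted(range(n), key=lambda i: -rho[i])
delta, nn = [0.0] * n, [-1] * n
delta[order[0]] = max(d[order[0]])
for rank, i in enumerate(order[1:], 1):
    j = min(order[:rank], key=lambda k: d[i][k])
    delta[i], nn[i] = d[i][j], j

# Centers: largest rho*delta products; others inherit their neighbor's label.
centers = sorted(range(n), key=lambda i: -rho[i] * delta[i])[:2]
label = [-1] * n
for c in centers:
    label[c] = centers.index(c)
for i in order:
    if label[i] < 0:
        label[i] = label[nn[i]]
```

    The single assignment pass along decreasing density is also where the "chain effect" originates: if the densest point of a region picks the wrong neighbor, every point downstream inherits that mistake — the failure mode DPC-NaN's boundary-and-merge step is designed to alleviate.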

    Study of EEG classification of depression by multi-scale convolution combined with the Transformer
    ZHAI Fengwen, SUN Fanglin, JIN Jing
    Journal of Xidian University    2024, 51 (2): 182-195.   DOI: 10.19665/j.issn1001-2400.20230211
    Abstract108)   HTML10)    PDF(pc) (2907KB)(66)       Save

    In classifying the EEG signals of depression with deep learning models, single-scale convolution extracts insufficient features and convolutional neural networks are limited in perceiving the global dependencies of EEG signals. To address this, a multi-scale dynamic convolution network module and a gated transformer encoder module are designed and combined with a temporal convolution network, yielding a hybrid model, MGTTCNet, to classify the EEG signals of patients with depression and healthy controls. First, multi-scale dynamic convolution captures the multi-scale time-frequency information of EEG signals in the spatial and frequency domains. Second, the gated transformer encoder learns global dependencies in the EEG signals, with the multi-head attention mechanism effectively enhancing the network's ability to express relevant EEG features. Third, the temporal convolution network extracts the temporal features available in EEG signals. Finally, the extracted abstract features are fed into the classification module. The proposed model is validated on the public dataset MODMA using the hold-out method and 10-fold cross validation, achieving classification accuracies of 98.51% and 98.53%, respectively. Compared with the baseline single-scale model EEGNet, its accuracy increases by 1.89% and 1.93%, its F1 value by 2.05% and 2.08%, and its kappa coefficient by 0.0381 and 0.0385, respectively. Ablation experiments verify the effectiveness of each designed module.

    Doppler frequency shift estimation and the tracking algorithm for air-to-air high-speed mobile communications
    ZHANG Xin, LI Jiandong
    Journal of Xidian University    2024, 51 (3): 30-37.   DOI: 10.19665/j.issn1001-2400.20240304
    Abstract107)   HTML16)    PDF(pc) (1484KB)(57)       Save

    In air-to-air high-speed mobile communications, the Doppler frequency shift of an aerial platform has a large range and changes rapidly, and it is difficult for existing frequency estimation algorithms to achieve both high estimation accuracy and engineering feasibility. In this paper, a time-varying Doppler frequency shift model is first constructed from the traditional frequency offset model and the spatiotemporal correlation of the Doppler shift with time. Based on this model, the coarse frequency offset estimates of adjacent short preambles are associated, and the frequency offset estimation problem is transformed into a classic overdetermined linear equation problem, which reduces the estimation variance to the greatest extent and improves the estimation accuracy. Simulation results show that the residual frequency offset of the proposed algorithm is significantly smaller than that of the traditional algorithm, with a root mean square error (RMSE) below 100 Hz when the SNR exceeds 5 dB. For the numerical stability problem of the proposed algorithm, a corresponding engineering-realizable method is given. Unlike the traditional phase-locked-loop feedback tracking scheme, the proposed algorithm adopts feedforward compensation, improving system stability and timeliness.
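    The idea of associating adjacent coarse estimates through an overdetermined system can be sketched as a least-squares line fit over the per-preamble estimates. The linear-drift model, preamble spacing, frequency values and per-preamble error values below are illustrative assumptions, not the paper's parameters.

```python
# Assumed model: f(t) = f0 + a*t over a short observation window.
f0, a = 12_000.0, 50_000.0            # Hz and Hz/s (illustrative)
t = [k * 1e-3 for k in range(6)]      # six short preambles, 1 ms apart
err = [3.0, -2.0, 1.0, -3.0, 2.0, -1.0]   # coarse per-preamble errors (Hz)
coarse = [f0 + a * tk + e for tk, e in zip(t, err)]

# Closed-form least squares for the overdetermined system f_k = f0 + a*t_k:
# six equations, two unknowns, so the individual errors average down.
n = len(t)
tm, fm = sum(t) / n, sum(coarse) / n
a_hat = sum((tk - tm) * (fk - fm) for tk, fk in zip(t, coarse)) \
        / sum((tk - tm) ** 2 for tk in t)
f0_hat = fm - a_hat * tm

# Feedforward compensation would then derotate the sample at time tn by
# exp(-2j*pi*(f0_hat + a_hat*tn)*tn) rather than closing a PLL loop.
```

    With individual coarse errors of up to 3 Hz, the fitted intercept lands within 1 Hz of the true offset — the variance-reduction effect of pooling adjacent estimates that the paper exploits.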

    Drone identification based on the normalized cyclic prefix correlation spectrum
    ZHANG Hanshuo, LI Tao, LI Yongzhao, WEN Zhijin
    Journal of Xidian University    2024, 51 (2): 68-75.   DOI: 10.19665/j.issn1001-2400.20230704
    Abstract106)   HTML6)    PDF(pc) (1621KB)(66)       Save

    Radio-frequency (RF)-based drone identification has the advantages of long detection distance and low environmental dependence, making it an indispensable approach to monitoring drones, and identifying drones effectively in the low signal-to-noise ratio (SNR) regime is a hot research topic. To ensure good video transmission quality, drones commonly adopt orthogonal frequency division multiplexing (OFDM) with a cyclic prefix (CP) for their video transmission links. Based on this property, we propose a drone identification algorithm based on a convolutional neural network (CNN) and the normalized CP correlation spectrum. Specifically, we first analyze the OFDM symbol and CP durations of drone signals, from which the normalized CP correlation spectrum is calculated. When the modulation parameters of a drone signal match those used in the calculation, several correlation peaks appear in the spectrum, and the positions of these peaks reflect protocol characteristics of the drone signal such as the frame structure and burst rules. Finally, a CNN is trained to extract these characteristics from the normalized CP correlation spectrum and identify the drone. In this work, a universal software radio peripheral (USRP) X310 is used to collect the RF signals of five drones and construct the experimental dataset. Experimental results show that the proposed algorithm outperforms spectrum-based and spectrogram-based algorithms and remains effective at low SNRs.
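    The normalized CP correlation spectrum itself is easy to reproduce on a synthetic signal: because the CP repeats the last part of each OFDM symbol one FFT-length later, correlating the signal with itself at lag N produces peaks at every CP start. The FFT size, CP length and QPSK subcarriers below are illustrative assumptions, not any drone's actual parameters.

```python
import cmath, random

N, L = 64, 16                 # assumed FFT size and CP length (illustrative)
random.seed(3)
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]

def ofdm_symbol():
    X = [random.choice(qpsk) for _ in range(N)]
    # Direct inverse DFT (O(N^2); fine for a sketch).
    x = [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
         for n in range(N)]
    return x[-L:] + x          # prepend the cyclic prefix

sig = sum((ofdm_symbol() for _ in range(3)), [])   # three consecutive symbols

def cp_corr(d):
    """Normalized correlation between sig[d:d+L] and sig[d+N:d+N+L]."""
    a, b = sig[d:d + L], sig[d + N:d + N + L]
    num = abs(sum(x * y.conjugate() for x, y in zip(a, b)))
    den = (sum(abs(x) ** 2 for x in a) * sum(abs(y) ** 2 for y in b)) ** 0.5
    return num / den

spectrum = [cp_corr(d) for d in range(len(sig) - N - L + 1)]
peaks = [d for d, r in enumerate(spectrum) if r > 0.999]
```

    With matched (N, L) the peaks sit exactly one symbol period (N + L samples) apart; their spacing and burst pattern are the protocol fingerprints the paper's CNN learns to classify.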

    Efficient manifold algorithm for the waveform design for precision jamming
    ZHANG Boyang, CHU Yi, YANG Zhongping, ZHOU Qingsong
    Journal of Xidian University    2023, 50 (6): 84-92.   DOI: 10.19665/j.issn1001-2400.20231001
    Abstract103)   HTML10)    PDF(pc) (3224KB)(68)       Save

    Precision jamming is a new concept in electronic warfare: a group of drone swarms equipped with jammers acts as an ultra-sparse array transmitting a jamming waveform, so as to impose blanket jamming precisely on the opposing equipment in the spatial domain while ensuring that friendly equipment is unaffected. However, existing methods apply only to specific scenarios and need improvement in computational efficiency. This paper proposes an efficient waveform design algorithm based on the complex circle manifold, which can control the energy level in the target and friendly regions according to practical requirements. First, we establish a novel multi-objective optimization problem (MOP) with unimodular constraints according to the precision jamming geometric model and the worst case of the spatial jamming energy distribution. Then, we adopt the Lp-norm to smooth and approximate the minimax objective function. Finally, from the perspective of Riemannian geometry, the MOP with unimodular constraints is viewed as an unconstrained problem on the complex circle manifold, and the Riemannian Conjugate Gradient (RCG) algorithm is employed to solve it efficiently. Simulation results demonstrate that the proposed algorithm can control the energy level in different regions by adjusting the regularization parameter, meeting the requirements of different precision jamming scenarios and tasks, and that it has lower computational complexity than existing methods for precision jamming waveform design.
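    The key manifold machinery — projecting the Euclidean gradient onto the tangent space of the complex circle manifold and retracting back to unit modulus — can be sketched with plain Riemannian gradient descent on a toy beampattern objective. The 8-element array, steering angles, fixed step size and simple two-direction cost are all illustrative assumptions; the paper uses a conjugate-gradient update on an Lp-smoothed minimax objective.

```python
import cmath, math, random

M = 8                                   # array elements (toy)
random.seed(5)

def steer(theta_deg):
    s = math.pi * math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * s * n) for n in range(M)]

a_tgt, a_frd = steer(0.0), steer(30.0)  # target and friendly directions

def energy(a, w):                       # |a^H w|^2
    return abs(sum(ai.conjugate() * wi for ai, wi in zip(a, w))) ** 2

def cost(w):                            # push energy off the friendly direction
    return energy(a_frd, w) - energy(a_tgt, w)

# Random unimodular start: every entry already lies on the complex circle.
w = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(M)]
c0 = cost(w)
for _ in range(300):
    # Euclidean gradient of cost w.r.t. conj(w): a_f(a_f^H w) - a_t(a_t^H w).
    pf = sum(ai.conjugate() * wi for ai, wi in zip(a_frd, w))
    pt = sum(ai.conjugate() * wi for ai, wi in zip(a_tgt, w))
    eg = [af * pf - at * pt for af, at in zip(a_frd, a_tgt)]
    # Project onto the tangent space of the complex circle manifold ...
    rg = [g - (g * wi.conjugate()).real * wi for g, wi in zip(eg, w)]
    # ... take a descent step and retract back to unit modulus.
    w = [wi - 0.01 * gi for wi, gi in zip(w, rg)]
    w = [wi / abs(wi) for wi in w]
```

    The elementwise retraction w_i / |w_i| is what makes the unimodular constraint free: every iterate is a feasible constant-modulus waveform, so the constrained MOP is handled as an unconstrained problem on the manifold, exactly the reformulation the paper exploits.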

    Encrypted deduplication scheme with access control and key updates
    HA Guanxiong, JIA Qiaowen, CHEN Hang, JIA Chunfu, LIU Lanqing
    Journal of Xidian University    2023, 50 (6): 195-206.   DOI: 10.19665/j.issn1001-2400.20230306
    Abstract102)   HTML8)    PDF(pc) (2150KB)(53)       Save

    In data outsourcing scenarios, access control and key updating have important application value, but it is hard for existing encrypted deduplication schemes to provide flexible and effective access control and key updates for outsourced user data. To solve this problem, an encrypted deduplication scheme with access control and key updates is proposed. First, an efficient access control scheme for encrypted deduplication is designed based on ciphertext-policy attribute-based encryption and proof of ownership. It combines access control with proof of ownership so that a single round of interaction between the client and the cloud server simultaneously checks whether the client has the correct access rights and holds the complete data content, effectively preventing unauthorized access and ownership fraud attacks, with low computation overhead and few communication rounds. Second, by combining the design ideas of server-aided encryption and randomized convergent encryption, an updatable encryption scheme suitable for encrypted deduplication is designed and combined with the proposed access control scheme to achieve hierarchical, user-transparent key updates. Security analysis and performance evaluation show that the proposed scheme provides confidentiality and integrity for outsourced user data while achieving efficient data encryption, decryption and key updating.

    Document image forgery localization and desensitization localization using the attention mechanism
    ZHENG Kengtao, LI Bin, ZENG Jinhua
    Journal of Xidian University    2023, 50 (6): 207-218.   DOI: 10.19665/j.issn1001-2400.20230105
    Abstract102)   HTML9)    PDF(pc) (6344KB)(49)       Save

    Important documents such as contracts, certificates and notifications are often stored and disseminated in digital form. Because they contain key textual information, such images are easily tampered with illegally and misused, causing serious social harm. Meanwhile, out of concern for personal privacy and security, people also tend to remove sensitive information from these digital documents. Malicious tampering and desensitization both introduce extra traces into the original images, but they differ in motivation and operation, so distinguishing them is necessary to locate tampered areas more accurately. To address this issue, we propose a convolutional encoder-decoder network that aggregates multi-level encoder features through U-Net connections to learn tampering and desensitization traces effectively. Several Squeeze-and-Excitation attention modules are introduced in the decoder to suppress image content and focus on weaker operation traces, improving the detection ability of the network. To assist training, we build a document image forensics dataset containing common tampering and desensitization operations. Experimental results show that our model performs effectively both on this dataset and on public tamper datasets, outperforming comparison algorithms, and that it is robust to several common post-processing operations.

    Real-time smoke segmentation algorithm combining global and local information
    ZHANG Xinyu, LIANG Yu, ZHANG Wei
    Journal of Xidian University    2024, 51 (1): 147-156.   DOI: 10.19665/j.issn1001-2400.20230405
    Abstract100)   HTML4)    PDF(pc) (1887KB)(76)       Save

    Smoke segmentation is challenging because smoke is irregular and translucent and its boundary is fuzzy. A dual-branch real-time smoke segmentation algorithm combining global and local information is proposed to solve this problem. A lightweight Transformer branch and a convolutional neural network branch extract the global and local features of smoke respectively, so the network fully learns long-distance pixel dependencies while retaining smoke details. This allows smoke and background to be distinguished accurately, improves segmentation accuracy, and satisfies the real-time requirement of practical smoke detection tasks. A multilayer perceptron decoder makes full use of multi-scale smoke features and further models the global context, enhancing the perception of smoke at multiple scales and thus the segmentation accuracy, while its simple structure reduces the decoder's computation. The algorithm reaches 92.88% mean intersection over union on a self-built smoke segmentation dataset with 2.96M parameters at 56.94 frames per second, and its comprehensive performance on a public dataset is better than that of other smoke detection algorithms. Experimental results show that the algorithm combines high accuracy with fast inference and meets the accuracy and real-time requirements of practical smoke detection.

    Hyperspectral image denoising based on tensor decomposition and adaptive weight graph total variation
    CAI Mingjiao, JIANG Junzheng, CAI Wanyuan, ZHOU Fang
    Journal of Xidian University    2024, 51 (2): 157-169.   DOI: 10.19665/j.issn1001-2400.20230412

    During the acquisition of hyperspectral images, various kinds of noise are inevitably introduced due to objective factors such as observation conditions, the material properties of the imager, and transmission conditions, which severely reduce image quality and limit the accuracy of subsequent processing. Denoising of hyperspectral images is therefore an extremely important preprocessing step. For this problem, a denoising algorithm based on low-rank tensor decomposition and adaptive weight graph total variation regularization, named LRTDGTV, is proposed in this paper. Specifically, low-rank tensor decomposition is used to characterize the global correlation among all bands, and adaptive weight graph total variation regularization is adopted to characterize the piecewise smoothness of hyperspectral images in the spatial domain and preserve their edge information. In addition, sparse noise (including stripe noise, impulse noise and deadline noise) and Gaussian noise are characterized by the l1-norm and the Frobenius norm, respectively. The denoising task can thus be formulated as a constrained optimization problem involving low-rank tensor decomposition and adaptive weight graph total variation regularization, which is solved by the augmented Lagrange multiplier (ALM) method. Experimental results show that the proposed algorithm can fully characterize the inherent structural characteristics of hyperspectral image data and achieves better denoising performance than the existing algorithms.

    Improved double deep Q network algorithm for service function chain deployment
    LIU Daohua, WEI Dinger, XUAN Hejun, YU Changming, KOU Libo
    Journal of Xidian University    2024, 51 (1): 52-59.   DOI: 10.19665/j.issn1001-2400.20230310

    Network Function Virtualization (NFV) has become a key technology of next-generation communication, and Virtual Network Function Service Chain (VNF-SC) mapping is a central issue in NFV. To reduce the energy consumption of network servers and improve the quality of service, a Service Function Chain (SFC) deployment algorithm based on an improved Double Deep Q Network (DDQN) is proposed. Because the network state changes dynamically, the service function chain deployment problem is modeled as a Markov Decision Process (MDP). Based on the network state and action rewards, the DDQN is trained online to obtain the optimal deployment strategy for the service function chain. To solve the problem that traditional deep reinforcement learning draws experience samples uniformly from the experience replay pool, leading to low learning efficiency of the neural network, a prioritized experience replay method based on importance sampling is designed to draw experience samples, avoiding high correlation between training samples and improving the learning efficiency of the neural network. Experimental results show that the proposed SFC deployment algorithm based on the improved DDQN can increase the reward value and that, compared with the traditional DDQN algorithm, it reduces energy consumption and the blocking rate by 19.89%~36.99% and 9.52%~16.37%, respectively.
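    The prioritized sampling with importance-sampling correction described above can be sketched as follows (a generic sketch of the standard technique; the exponents `alpha`/`beta` and the priority values are illustrative assumptions, not the paper's settings). Experiences with larger TD error are drawn more often, and the importance-sampling weights correct the resulting bias in the gradient updates.

```python
import numpy as np

def sample_prioritized(priorities, batch_size, alpha=0.6, beta=0.4, rng=None):
    """Draw a batch from the replay pool proportionally to priority^alpha,
    returning indices and normalized importance-sampling weights."""
    rng = rng or np.random.default_rng()
    p = np.asarray(priorities, dtype=float) ** alpha
    probs = p / p.sum()                       # sampling distribution P(i)
    idx = rng.choice(len(probs), size=batch_size, p=probs)
    n = len(probs)
    # IS weights correct the bias introduced by non-uniform sampling.
    w = (n * probs[idx]) ** (-beta)
    return idx, w / w.max()                   # normalize for stable updates

idx, w = sample_prioritized([0.1, 0.5, 2.0, 0.05], batch_size=32,
                            rng=np.random.default_rng(1))
```

High-priority experiences (such as the one with priority 2.0) dominate the batch, while their weights are scaled down so the learned values remain unbiased in expectation.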

    Lightweight centroid locating method for the satellite target
    LIU Luyuan, HAN Luyao, LI Jiaojiao, XIA Hui, RAO Peng, SONG Rui
    Journal of Xidian University    2023, 50 (6): 120-132.   DOI: 10.19665/j.issn1001-2400.20230307

    The space-based photoelectric detection unit is vital in satellite identification and positioning, offering a large field of view, a small load, and flexible maneuvering. However, the computational capability of the CPU mounted on an on-orbit satellite is quite limited and can hardly afford deep learning networks. In this paper, we analyze the characteristics of space targets in depth and design a lightweight real-time processing algorithm. We specifically design feature extractors for the line and satellite contour patterns and propose a minimum bounding box calculation strategy. The algorithm is tested on a simulation dataset and verified on images captured by a physical emulation platform. Testing results demonstrate the effectiveness of our algorithm: its detection accuracy is better than that of YOLOv5n, while its computational load is only 10% that of the competitive methods. We transplant the algorithm to an on-orbit real-time processing platform, where the positioning speed reaches 5 120×5 120@5fps. The accuracy of angle measurement based on the centroid positioning results is better than 0.05°, which meets the requirement of real application systems.

    Several classes of cryptographic Boolean functions with high nonlinearity
    LIU Huan, WU Gaofei
    Journal of Xidian University    2023, 50 (6): 237-250.   DOI: 10.19665/j.issn1001-2400.20230416

    Boolean functions have important applications in cryptography. Bent functions, which are Boolean functions with maximum nonlinearity, have been a hot research topic in symmetric cryptography. From the spectral perspective, bent functions have a flat spectrum under the Walsh-Hadamard transform. Negabent functions are a class of generalized bent functions with a uniform spectrum under the nega-Hadamard transform, and a generalized negabent function is a function with a uniform spectrum under the generalized nega-Hadamard transform. Bent functions have been extensively studied since their introduction in 1976, but there has been little research on negabent and generalized negabent functions. In this paper, the properties of generalized negabent functions and generalized bent-negabent functions are analyzed, and several classes of generalized negabent functions, generalized bent-negabent functions, and generalized semibent-negabent functions are constructed. First, by analyzing the link between the nega-crosscorrelation of generalized Boolean functions and the generalized nega-Hadamard transform, a criterion for generalized negabent functions is presented; based on this criterion, a class of generalized negabent functions is constructed. Second, two classes of generalized negabent functions of the form f(x)=c1f1(x(1))+c2f2(x(2))+…+crfr(x(r)) are constructed by using the direct sum construction. Finally, generalized bent-negabent functions and generalized semibent-negabent functions over Z8 are obtained by using the direct sum construction. The new construction methods given in this paper enrich the results on negabent functions.
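    The flat-spectrum characterization of bent functions can be checked concretely with a small Walsh-Hadamard computation (a standard textbook calculation, not code from the paper): an n-variable function is bent exactly when every spectral value has magnitude 2^{n/2}.

```python
from itertools import product

def walsh_spectrum(f_vals, n):
    """Walsh-Hadamard spectrum of an n-variable Boolean function.

    f_vals: truth table of f (0/1) in lexicographic order of x.
    W_f(a) = sum_x (-1)^(f(x) XOR a.x); |W_f(a)| = 2^{n/2} for all a
    characterizes a bent function.
    """
    xs = list(product((0, 1), repeat=n))
    spec = []
    for a in xs:
        s = sum((-1) ** (f ^ (sum(ai * xi for ai, xi in zip(a, x)) & 1))
                for f, x in zip(f_vals, xs))
        spec.append(s)
    return spec

# f(x1, x2) = x1*x2 is the classic 2-variable bent function.
spec = walsh_spectrum([x1 & x2 for x1, x2 in product((0, 1), repeat=2)], 2)
```

For n = 2 the spectrum is flat with |W_f(a)| = 2 for every a, confirming that x1*x2 attains the maximum nonlinearity 2^{n-1} - 2^{n/2-1} = 1.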

    Precision jamming waveform design method for spatial and frequency domain joint optimization
    WANG Jing, ZHANG Kedi, ZHANG Jianyun, ZHOU Qinsong, WU Minglin, LI Zhihui
    Journal of Xidian University    2023, 50 (6): 93-104.   DOI: 10.19665/j.issn1001-2400.20230805

    Precision jamming is one of the hot research directions in new electronic warfare. To accurately adjust the spatial and frequency domain distribution of jamming power, a precision jamming waveform design method based on the alternating direction method of multipliers is proposed. First, a mathematical model is presented for designing constant-modulus precision jamming waveforms under a joint spatial and frequency domain optimization objective. By introducing auxiliary variables, the non-convex quartic term in the original objective function is transformed into a quadratic term, so that the optimization problem can be solved by the alternating direction method of multipliers. Through theoretical derivation, an optimal closed-form solution for each iteration is obtained, reducing the computational complexity of the algorithm. Simulation experiments show that, compared with existing precision jamming waveform design methods that optimize only the spatial jamming power distribution, the waveform designed by this algorithm gives the synthesized signal better power spectrum distribution characteristics within the preset jamming area, which better matches the requirements of actual jamming tasks. Meanwhile, compared with existing joint spatial and frequency domain optimization algorithms, the proposed algorithm considers the waveform constant-modulus constraint and is thus better suited to engineering implementation. Moreover, the proposed algorithm has lower computational complexity and can further reduce computation time through parallel computing.

    Deduplication scheme with data popularity for cloud storage
    HE Xinfeng, YANG Qinqin
    Journal of Xidian University    2024, 51 (1): 187-200.   DOI: 10.19665/j.issn1001-2400.20230205

    With the development of cloud computing, more enterprises and individuals tend to outsource their data to cloud storage providers to relieve local storage pressure, and cloud storage pressure is becoming an increasingly prominent issue. To improve storage efficiency and reduce communication cost, data deduplication technology has been widely used. Existing approaches include identical-data deduplication based on the hash table and similar-data deduplication based on the Bloom filter, but both rarely consider the impact of data popularity. In fact, data outsourced to cloud storage can be divided into popular and unpopular data according to popularity. Popular data are accessed frequently and have numerous duplicate copies and similar data in the cloud, so high-accuracy deduplication is required; unpopular data are rarely accessed and have fewer duplicate copies and similar data, so low-accuracy deduplication meets the demand. To address this problem, a novel Bloom filter variant named PDBF (popularity dynamic Bloom filter) is proposed, which incorporates data popularity into the Bloom filter. Moreover, a PDBF-based deduplication scheme is constructed to perform different degrees of deduplication depending on how popular a datum is. Experiments demonstrate that the scheme makes an excellent tradeoff among computational time, memory consumption, and deduplication efficiency.
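    The idea of popularity-dependent deduplication accuracy can be sketched with a Bloom filter that varies its number of hash functions by popularity (an assumed, simplified reading of PDBF for illustration; the actual design and parameters may differ). Popular data use more hash positions, lowering the false-positive rate where high-accuracy deduplication matters.

```python
import hashlib

class PopularityBloomFilter:
    """Bloom filter sketch where popular items use all k hashes
    (high accuracy) and unpopular items use fewer (cheaper checks)."""

    def __init__(self, m=1024, k_popular=4, k_unpopular=2):
        self.bits = [0] * m
        self.m, self.k_pop, self.k_unpop = m, k_popular, k_unpopular

    def _positions(self, item, k):
        # Derive k positions from salted SHA-256 digests.
        for i in range(k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item, popular):
        k = self.k_pop if popular else self.k_unpop
        for pos in self._positions(item, k):
            self.bits[pos] = 1

    def might_contain(self, item, popular):
        k = self.k_pop if popular else self.k_unpop
        return all(self.bits[pos] for pos in self._positions(item, k))

bf = PopularityBloomFilter()
bf.add("block-A", popular=True)
```

Because the unpopular check uses a prefix of the same salted hashes, an item inserted as popular is still found by a low-accuracy query, just with a higher false-positive rate.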

    Optimization of light sources for the IRS-assisted indoor VLC system considering HPSA
    HE Huimeng, YANG Ting, SHI Huili, WANG Ping, BING Zhe, WANG Xing, BAI Bo
    Journal of Xidian University    2024, 51 (2): 46-55.   DOI: 10.19665/j.issn1001-2400.20240103

    Aiming at the problem of uneven optical power distribution on the receiving plane in visible light communication (VLC) systems, a light source optimization method based on the hybrid particle swarm algorithm (HPSA) is proposed for an intelligent reflecting surface (IRS)-assisted indoor VLC system. Taking two layout schemes of 16 light-emitting diodes (LEDs), a rectangular and a hybrid arrangement, as examples, the variance of received optical power on the receiving plane is set as the fitness function, and the proposed HPSA is combined with IRS technology to optimize the half-power angle and positional layout of the LEDs as well as the yaw and roll angles of the IRS. The initial (unoptimized) system, the HPSA-optimized system, and the HPSA-optimized IRS-aided VLC system are then simulated and compared. The results indicate that, when the first reflection link is considered, the fluctuations of received optical power and signal-to-noise ratio of the HPSA-optimized VLC system decrease significantly for both light source layouts compared with the original system. The HPSA-optimized IRS-aided system improves the received optical power fluctuation in the rectangular layout as much as the HPSA-optimized VLC system does, and is significantly better than the HPSA-optimized VLC system only in the hybrid layout. Among the three systems, the HPSA-optimized IRS-aided VLC system has the largest average received optical power. Besides, the average root mean square delay spread of the three systems is better with the hybrid layout than with the rectangular layout. This work will benefit the study of light source distribution in indoor VLC systems.

    Efficient seed generation method for software fuzzing
    LIU Zhenyan, ZHANG Hua, LIU Yong, YANG Libo, WANG Mengdi
    Journal of Xidian University    2024, 51 (2): 126-136.   DOI: 10.19665/j.issn1001-2400.20230901

    As one of the effective ways to discover software vulnerabilities in software engineering, fuzzing plays a significant role in exposing potential security flaws. The traditional seed selection strategy in fuzzing cannot effectively generate high-quality seeds, so the testcases produced by mutation fail to reach deeper paths and trigger more security vulnerabilities. To address these challenges, a seed generation method based on an improved generative adversarial network (GAN) is proposed, which can flexibly expand the types of generated seeds through encoding and decoding and significantly improve the fuzzing performance on most applications with different input types. In experiments, the proposed seed generation strategy significantly improved coverage and the number of unique crashes, and effectively increased the seed generation speed. Six open-source programs with different highly structured inputs were selected to demonstrate the effectiveness of the strategy: compared with the original strategy, the average branch coverage increased by 2.79%, the number of paths increased by 10.35%, and 86.92% additional unique crashes were found.

    Subspace clustering algorithm optimized by non-negative Lagrangian relaxation
    ZHU Dongxia, JIA Hongjie, HUANG Longxia
    Journal of Xidian University    2024, 51 (1): 100-113.   DOI: 10.19665/j.issn1001-2400.20230204

    Spectral relaxation is widely used in traditional subspace clustering and spectral clustering. First, the eigenvectors of the Laplacian matrix are computed; since an eigenvector contains negative entries, a 2-way clustering result can be obtained directly from the signs of its elements. For multi-way clustering problems, 2-way graph partitioning is applied recursively, or k-means is run in the eigenvector space, so the assignment of cluster labels is indirect, and this post-processing step increases the instability of the clustering results. To overcome this limitation of spectral relaxation, a subspace clustering algorithm optimized by non-negative Lagrangian relaxation is proposed, which integrates self-representation learning and a rank constraint in the objective function. The similarity matrix and the membership matrix are solved by non-negative Lagrangian relaxation, and the non-negativity of the membership matrix is maintained so that it becomes the cluster posterior probability. When the algorithm converges, the clustering result is obtained directly by assigning each data object to the cluster with the largest posterior probability. Compared with existing subspace clustering and spectral clustering methods, the proposed algorithm designs a new optimization rule that realizes the direct allocation of cluster labels without additional clustering steps. Finally, the convergence of the proposed algorithm is analyzed theoretically. Extensive experiments on five benchmark clustering datasets show that the clustering performance of the proposed method is better than that of recent subspace clustering methods.
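    The direct label assignment that replaces k-means post-processing reduces to a row-wise argmax over the non-negative membership matrix (a toy illustration; the matrix `H` below is made up, not output of the paper's algorithm):

```python
import numpy as np

# Membership (cluster posterior) matrix H: rows = objects, cols = clusters.
# Non-negative Lagrangian relaxation keeps H non-negative, so labels are
# read off directly, with no k-means post-processing step.
H = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
labels = H.argmax(axis=1)   # assign each object to its most probable cluster
```

This is the stability advantage the abstract describes: the assignment is deterministic given H, whereas a k-means step on eigenvectors depends on its random initialization.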

    Advances in security analysis of software-defined networking flow rules
    XIONG Wanyin, MAO Jian, LIU Ziwen, LIU Wenmao, LIU Jianwei
    Journal of Xidian University    2023, 50 (6): 172-194.   DOI: 10.19665/j.issn1001-2400.20230904

    With the increasing diversification of network functions, the software-defined networking (SDN) architecture, which provides centralized network control and programmability, has been deployed in various fields. However, the unique hierarchical structure and operation mechanism of SDN also introduce new security challenges, among which flow rules, as the carrier of control plane management decisions and the basis of data plane network behavior, have become the focus of SDN attack and defense. Aiming at the security issues of flow rules in SDN, this paper first reviews the characteristics and security risks of the SDN architecture. Based on the mechanism of flow rules in SDN, attacks against flow rules are systematically divided into two categories, namely interference with control plane decisions and violations in data plane implementation, with attack examples introduced. Then, methods for improving flow rule security are analyzed and classified into two categories: checking and enhancing the security of flow rules. Existing implementation mechanisms are summarized, and their limitations are briefly analyzed. In terms of flow rule security checking, two mainstream methods, model-based checking and test-packet-based checking, are analyzed and discussed. In terms of flow rule security enhancement, three specific approaches based on permission control, conflict resolution and path verification are introduced and discussed. Finally, future research trends in flow rule security are prospected.

    Study of the parallel MoM on a domestic heterogeneous DCU platform
    JIA Ruipeng, LIN Zhongchao, ZUO Sheng, ZHANG Yu, YANG Meihong
    Journal of Xidian University    2024, 51 (2): 76-83.   DOI: 10.19665/j.issn1001-2400.20230504

    In view of the development trend of the domestic supercomputer CPU+DCU heterogeneous architecture, research on a massively heterogeneous CPU+DCU parallel higher-order method of moments is carried out. First, the basic strategy for accelerating the method-of-moments computation with DCUs is given. Building on the load-balancing strategy of the homogeneous parallel method of moments, an efficient "MPI+OpenMP+DCU" heterogeneous parallel programming framework is proposed to address the mismatch between computing tasks and computing power. In addition, a fine-grained task division strategy and asynchronous communication are adopted to design a pipeline for the DCU computation process, overlapping computation with communication and improving the acceleration performance of the program. The accuracy of the CPU+DCU heterogeneous parallel method of moments is verified by comparing its simulation results with those of the finite element method. Scalability results on the domestic DCU heterogeneous platform show that the CPU+DCU co-computing program achieves a 5.5~7.0 times speedup at different parallel scales, and that the parallel efficiency reaches 73.5% when scaling from 360 nodes to 3,600 nodes (1,036,800 cores in total).

    Multi-objective optimization offloading decision with cloud-side-end collaboration in smart transportation scenarios
    ZHU Sifeng, SONG Zhaowei, CHEN Hao, ZHU Hai, QIAO Rui
    Journal of Xidian University    2024, 51 (3): 63-75.   DOI: 10.19665/j.issn1001-2400.20230802

    With the rapid development of intelligent transportation, cloud computing and edge computing networks, information interaction among vehicle terminals, roadside units and central cloud servers becomes more and more frequent. To efficiently realize vehicle-road-cloud integrated fusion sensing, group decision making and reasonable resource allocation among servers in the cloud-edge-terminal collaborative computing scenario of intelligent transportation, a network architecture based on the comprehensive integration of cloud-edge-terminal and intelligent transportation is designed. Under this architecture, task types are reasonably divided and each server selectively caches and offloads them. For the cloud-edge-terminal collaborative computing scenario, an adaptive task caching model, a task offloading delay model, a system energy loss model, a model evaluating the dissatisfaction of in-vehicle users with the quality of service, and a multi-objective optimization problem model are designed in turn, and a multi-objective task offloading decision scheme based on an improved non-dominated sorting genetic algorithm is given. Experimental results show that the proposed scheme can effectively reduce the delay and energy consumption of the task offloading process, improve the utilization of system resources, and bring a better service experience to vehicle users.
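    The non-dominated solutions such a genetic algorithm maintains can be illustrated with a plain Pareto-front filter (a generic sketch; the objective values below are hypothetical, and the actual scheme also uses NSGA-style sorting and crowding distance):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep only points not dominated by any other point, i.e. the
    trade-off front over objectives such as (delay, energy)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (delay, energy) values for candidate offloading decisions.
front = non_dominated([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)])
```

Here (3, 4) is dominated by (2, 3) and (5, 5) by (1, 5), leaving three trade-off decisions a dispatcher could choose among according to actual needs.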

    Histogram publishing algorithm for degree distribution via shuffled differential privacy
    DING Hongfa,FU Peiwang,PENG Changgen,LONG Shigong,WU Ningbo
    Journal of Xidian University    2023, 50 (6): 219-236.   DOI: 10.19665/j.issn1001-2400.20230207

    At present, existing histogram publishing algorithms based on centralized or local differential privacy for the degree distribution of graph data can neither balance the privacy and utility of published data nor preserve the identity privacy of end users. To solve this problem, a histogram publishing algorithm for degree distribution via shuffled differential privacy (SDP) is proposed under the Encode-Shuffle-Analyze framework. First, a privacy-preserving framework for histogram publishing of degree distribution is designed based on shuffled differential privacy. In this framework, the noise impact that the encoder brings to distributed users is reduced by employing interactive user grouping, a shuffler and the square wave noise mechanism while adding noise via local differential privacy, and the noisy degree distribution histogram is reconciled via maximum likelihood estimation at the analyzer, improving the utility of the published data. Second, specific algorithms are proposed for distributed user grouping, adding shuffled differential privacy noise, and reconciling the noisy data, and it is proved that these algorithms satisfy (ε,σ)-SDP. Experiments and comparisons illustrate that the proposed algorithms preserve the privacy of distributed users while improving data utility by more than 26% in terms of L1 distance, H distance and MSE compared with existing related algorithms. The proposed algorithms also have low overhead and stable data utility, and are suitable for publishing and sharing degree distribution histograms for graph data of different scales.
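    The Encode-Shuffle-Analyze pipeline can be sketched for degree reports (a generic shuffle-model illustration, not the paper's mechanism, which uses interactive user grouping and square wave noise rather than the Laplace-style noise below). Each user perturbs their own degree locally; the shuffler then permutes the reports so the analyzer cannot link a noisy value to a user.

```python
import random

def shuffled_dp_degrees(degrees, epsilon, rng=random.Random(0)):
    """Encode: each user adds local noise to their node degree.
    Shuffle: a random permutation breaks the user-report linkage.
    (Laplace noise sketched as the difference of two exponentials.)"""
    reports = [d + rng.expovariate(epsilon) - rng.expovariate(epsilon)
               for d in degrees]          # Laplace scale 1/epsilon, sensitivity 1
    rng.shuffle(reports)                  # anonymize report order
    return reports

noisy = shuffled_dp_degrees([3, 1, 4, 1, 5], epsilon=1.0)
```

The analyzer only sees the shuffled multiset of noisy degrees, from which it can estimate the degree histogram (the paper reconciles this estimate by maximum likelihood).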

    Bidirectional adaptive differential privacy federated learning scheme
    LI Yang, XU Jin, ZHU Jianming, WANG Youwei
    Journal of Xidian University    2024, 51 (3): 158-169.   DOI: 10.19665/j.issn1001-2400.20230706

    With the explosive growth of personal data, federated learning based on differential privacy can be used to solve the problem of data islands while preserving user data privacy. Participants train on local data and share noised parameters with a central server for aggregation, realizing distributed machine learning. However, this model has two defects: on the one hand, data information can still be compromised when the central server broadcasts parameters, with the risk of user privacy leakage; on the other hand, adding too much noise to the parameters reduces the quality of parameter aggregation and affects the model accuracy of federated learning. To solve the above problems, a bidirectional adaptive differential privacy federated learning scheme (Federated Learning Approach with Bidirectional Adaptive Differential Privacy, FedBADP) is proposed, which adaptively adds noise to the gradients transmitted by both the participants and the central server, keeping data secure without affecting model accuracy. Meanwhile, considering the performance limitations of participants' hardware devices, the model samples their gradients to reduce communication overhead and uses RMSprop to accelerate convergence on both the participants and the central server, improving the accuracy of the model. Experiments show that the proposed model can enhance user privacy preservation while maintaining good accuracy.
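    The bidirectional noising can be sketched as gradient clipping plus a noise scale that adapts over training rounds (an assumed decaying-scale mechanism purely for illustration; FedBADP's exact adaptation rule may differ). The same routine would apply on upload by participants and on broadcast by the server.

```python
import numpy as np

def add_adaptive_noise(grad, clip, base_sigma, round_idx, decay=0.99,
                       rng=np.random.default_rng(0)):
    """Clip the gradient to bound L2 sensitivity, then add Gaussian noise
    whose scale decays over rounds: strong protection early, accuracy late."""
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)            # bound the L2 sensitivity
    sigma = base_sigma * (decay ** round_idx)  # adaptive (decaying) scale
    return grad + rng.normal(0.0, sigma * clip, size=grad.shape)

g = np.array([3.0, 4.0])                       # ||g|| = 5, clipped to 1
noisy_g = add_adaptive_noise(g, clip=1.0, base_sigma=0.1, round_idx=10)
```

Clipping is what makes the Gaussian mechanism well-defined here: without a bound on the gradient norm, no finite noise scale yields a differential privacy guarantee.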

    Secure K-prototype clustering against the collusion of rational adversaries
    TIAN Youliang, ZHAO Min, BI Renwan, XIONG Jinbo
    Journal of Xidian University    2024, 51 (2): 196-210.   DOI: 10.19665/j.issn1001-2400.20230305

    Aiming at the problems of data privacy leakage in the cloud environment and collusion between cloud servers during clustering, a cooperative secure K-prototype clustering scheme (CSKC) against rational colluding adversaries is proposed. First, considering that homomorphic encryption does not directly support nonlinear computation, secure computing protocols are designed based on homomorphic encryption and additive secret sharing to ensure that the input data and intermediate results are kept as additive secret shares, and to achieve accurate evaluation of the secure comparison function. Second, according to game equilibrium theory, several efficient incentive mechanisms are designed, and mutual-condition and report contracts are constructed to constrain the cloud servers to execute the secure computing protocols honestly and without collusion. Finally, the proposed protocols and contracts are analyzed theoretically, and the performance of the CSKC scheme is verified experimentally. Experimental results show that, compared with the model accuracy in the plaintext environment, the accuracy loss of the CSKC scheme is within 0.22%.
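    The additive secret sharing that keeps inputs and intermediate results hidden can be sketched directly (a standard two-party illustration, not the CSKC protocol itself): each value is split into random shares that sum to it modulo a fixed modulus, and addition is done share-wise without revealing either operand.

```python
import random

MOD = 2 ** 32

def share(x, n_parties, rng=random.Random(7)):
    """Split secret x into additive shares: x = sum(shares) mod MOD.
    Any proper subset of shares reveals nothing about x."""
    shares = [rng.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Secure addition: each server adds its local shares of x and y;
# the resulting shares reconstruct x + y without exposing x or y.
xs, ys = share(123, 2), share(456, 2)
zs = [(a + b) % MOD for a, b in zip(xs, ys)]
```

Nonlinear steps such as the secure comparison in CSKC need extra protocol machinery on top of this, which is why the scheme combines secret sharing with homomorphic encryption.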

    Integration of pattern search into the grasshopper optimization algorithm and its applications
    XIAO Yixin, LIU Sanyang
    Journal of Xidian University    2024, 51 (2): 137-156.   DOI: 10.19665/j.issn1001-2400.20230602

    When applying intelligent optimization algorithms to complex optimization problems, balancing exploration and exploitation is of great significance for obtaining optimal solutions. This paper proposes a grasshopper optimization algorithm integrating pattern search to address the limitations of the traditional grasshopper optimization algorithm, such as low convergence accuracy, weak search capability, and susceptibility to local optima on complex problems. First, a Sine chaotic map is introduced to initialize the positions of grasshopper individuals, reducing the probability of overlapping individuals and enhancing the diversity of the population in the early iterations. Second, the pattern search method performs a local search around the best target found so far, improving the convergence speed and optimization accuracy of the algorithm. Additionally, to avoid falling into local optima in the later stages, a reverse learning strategy based on convex lens imaging is introduced. In the experimental section, a series of ablation experiments is conducted on the improved algorithm to validate the independent effectiveness of each strategy: the Sine chaotic map, pattern search, and reverse learning. Simulation experiments are performed on two sets of test functions, with the results analyzed using the Wilcoxon rank-sum test and the Friedman test. The results consistently demonstrate that the improved algorithm achieves significant enhancements in both convergence speed and optimization accuracy. Furthermore, applying the improved algorithm to mobile robot path planning further validates its effectiveness.
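    The Sine chaotic map initialization can be sketched as follows (a common form of the map, x_{k+1} = sin(pi*x_k); the paper's exact parameterization and the starting value x0 are assumptions). The chaotic sequence spreads initial positions over the search space more evenly than overlapping random draws.

```python
import math

def sine_chaotic_init(pop_size, dim, lower, upper, x0=0.7):
    """Initialize a population by iterating the Sine chaotic map
    x <- sin(pi * x) in (0, 1] and mapping each value into [lower, upper]."""
    x = x0
    pop = []
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = math.sin(math.pi * x)            # chaotic iteration
            ind.append(lower + x * (upper - lower))
        pop.append(ind)
    return pop

pop = sine_chaotic_init(pop_size=5, dim=3, lower=-10.0, upper=10.0)
```

The sequence is deterministic given x0, so experiments using it are exactly reproducible, unlike seeding-dependent pseudo-random initialization.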

    Research on the fast implementation method of Winograd transposed convolution
    LI Zhao,HUANG Chengcheng,HE Yizhi,SU Xiaojie
    Journal of Xidian University    2023, 50 (6): 148-160.   DOI: 10.19665/j.issn1001-2400.20230308

    The Winograd transposed convolution algorithm is a widely used convolution acceleration method on Field Programmable Gate Arrays (FPGAs). It solves the zero-padding problem of transposed convolution by performing Winograd convolution after grouping. However, this method requires grouping the input feature map and the convolution kernel, and must reorganize the partial results to generate a complete output feature map; the complex calculation of element coordinates increases the design difficulty. To solve these problems, a Winograd transposed convolution method based on a unified transformation matrix is proposed, which replaces the grouping of the input feature map and convolution kernel with a unified transformation matrix and effectively solves the problems of overlapping summation, zero padding, convolution kernel inversion, decomposition and reorganization. Guided by this method, and combined with data reuse, double buffering and pipelining, a transposed convolution accelerator is designed on an FPGA. The Gaussian-Poisson generative adversarial network is selected for experimental verification and compared with mainstream transposed convolution methods. Experimental results show that the proposed method effectively reduces resource consumption and power consumption, and that the effective performance of the accelerator is 1.13x~23.92x higher than that of existing transposed convolution methods.
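    The unified-transformation idea builds on standard Winograd minimal filtering. The classic 1D case F(2,3) (a textbook example, not the paper's transposed-convolution matrices) shows how fixed transform matrices trade multiplications for additions:

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap correlation with
    4 multiplications instead of 6. d: 4 input samples, g: 3 filter taps."""
    Bt = np.array([[1,  0, -1,  0],      # input transform
                   [0,  1,  1,  0],
                   [0, -1,  1,  0],
                   [0,  1,  0, -1]], dtype=float)
    G = np.array([[1.0,  0.0, 0.0],      # filter transform
                  [0.5,  0.5, 0.5],
                  [0.5, -0.5, 0.5],
                  [0.0,  0.0, 1.0]])
    At = np.array([[1, 1,  1,  0],       # output transform
                   [0, 1, -1, -1]], dtype=float)
    m = (G @ g) * (Bt @ d)               # 4 elementwise multiplications
    return At @ m

y = winograd_f23(np.array([1.0, 2.0, 3.0, 4.0]), np.array([1.0, 2.0, 3.0]))
```

The two outputs match the direct sliding-window products 1*1+2*2+3*3 = 14 and 2*1+3*2+4*3 = 20; on an FPGA the savings in multipliers is what makes the method attractive.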
