Table of Contents

    20 October 2023, Volume 50, Issue 5
    Information and Communications Engineering & Computer Science and Technology
    Sea-surface multi-target tracking method aided by target returns features
    ZHANG Yichen, SHUI Penglang, LIAO Mo
    Journal of Xidian University. 2023, 50(5):  1-10.  doi:10.19665/j.issn1001-2400.20230201

    Owing to the complex marine environment and densely distributed sea-surface targets, radars often face demanding tracking scenarios with a high false alarm rate and high target density. Measurement points originating from clutter and from multiple closely spaced targets appear densely in the detection space. Traditional tracking methods use only position information, which cannot reliably distinguish the source of each measurement, resulting in serious degradation of tracking performance. Target return features can be used to address this problem without increasing the complexity of the algorithm, but their generalization ability is limited, so suitable features must be selected according to the radar system, operating scene and requirements. In this paper, the detection test statistic and the target radial velocity measurement are used as target return features, and the tracking equations are reconstructed so that the features are exploited in every stage of tracking. In addition, a "two-level" tracking process is adopted, which separates confirmed tracks from candidate tracks according to track quality. Experimental results show that the proposed method achieves robust target tracking in complex multi-target scenarios on the sea surface.
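
    As a toy illustration of the general idea (not the authors' algorithm), the sketch below scores gated candidate measurements by a joint likelihood that combines position with two assumed return features, an amplitude-like test statistic and a radial-velocity measurement; all noise levels and thresholds are illustrative.

```python
import numpy as np

def associate(track_pos, pred_vel, meas, gate=9.21,
              sigma_pos=25.0, sigma_amp=2.0, sigma_vr=1.5, exp_amp=12.0):
    """Pick the gated measurement with the best combined score.

    meas: array of rows [x, y, amplitude, radial_velocity].
    Position gating uses a chi-square-like threshold; the final score
    multiplies Gaussian likelihoods of position, amplitude and radial
    velocity (all parameters here are illustrative assumptions).
    """
    best_idx, best_score = -1, 0.0
    for i, (x, y, amp, vr) in enumerate(meas):
        d2 = ((x - track_pos[0])**2 + (y - track_pos[1])**2) / sigma_pos**2
        if d2 > gate:                      # outside the position gate
            continue
        l_pos = np.exp(-0.5 * d2)
        l_amp = np.exp(-0.5 * ((amp - exp_amp) / sigma_amp)**2)
        l_vr = np.exp(-0.5 * ((vr - pred_vel) / sigma_vr)**2)
        score = l_pos * l_amp * l_vr       # feature-aided joint score
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score

meas = np.array([[103.0, 98.0, 11.5, 3.1],    # likely the target
                 [101.0, 99.0,  4.0, -7.0]])  # clutter-like return
print(associate(np.array([100.0, 100.0]), 3.0, meas))
```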

    Construction method of temporal correlation graph convolution network for traffic prediction
    ZHANG Kehan, LI Hongyan, LIU Wenhui, WANG Peng
    Journal of Xidian University. 2023, 50(5):  11-20.  doi:10.19665/j.issn1001-2400.20221103

    Existing traffic prediction methods for the virtual networks of data centers have difficulty characterizing the correlation between links, which limits the achievable prediction accuracy. To address this, this paper proposes a Temporal Correlation Graph Convolutional Network (TC-GCN), which represents the temporal and spatial correlation of data center network link traffic and improves the accuracy of traffic prediction. First, a graph convolutional network adjacency matrix with a time attribute is constructed to overcome the prediction deviation caused by traffic asynchrony between virtual network links and to represent link correlation accurately. Second, a traffic prediction mechanism based on the weighting of long- and short-window graph convolutional networks is designed; it adapts to the smooth and fluctuating segments of the traffic sequence with finite-length long/short windows, effectively avoids the vanishing gradient problem and improves the traffic prediction accuracy of the virtual network. Finally, an error weighting unit is designed to sum the prediction results of the long- and short-window graph convolutional networks, and the output of the network is the predicted value of link traffic. To ensure the practicality of the results, simulation experiments on the proposed temporal correlation graph convolutional network are carried out on real data center network data. Experimental results show that the proposed method achieves a higher prediction accuracy than traditional graph convolutional network traffic prediction methods.
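
    The error weighting unit can be pictured with a minimal numpy sketch, assuming the simplest inverse-error weighting of a long-window and a short-window predictor; this illustrates the fusion idea only, not the TC-GCN implementation.

```python
import numpy as np

def fuse_predictions(pred_long, pred_short, err_long, err_short, eps=1e-8):
    """Weight two window-based predictions by their recent inverse errors."""
    w_long = 1.0 / (err_long + eps)
    w_short = 1.0 / (err_short + eps)
    return (w_long * pred_long + w_short * pred_short) / (w_long + w_short)

# toy example: the short-window model has been more accurate recently,
# so it dominates the fused link-traffic forecast
print(fuse_predictions(np.array([10.0]), np.array([12.0]),
                       err_long=4.0, err_short=1.0))
```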

    Indoor pseudolite hybrid fingerprint positioning method
    LI Yaning, LI Hongsheng, YU Baoguo
    Journal of Xidian University. 2023, 50(5):  21-31.  doi:10.19665/j.issn1001-2400.20221102

    At present, the interaction mechanism between complex indoor environments and pseudolite signals has not been fundamentally resolved, and the stability, continuity and accuracy of indoor positioning remain technical bottlenecks. Existing fingerprint positioning methods suffer from a collection workload that grows with the required positioning accuracy and coverage, and they cannot complete positioning in areas where no fingerprints have been collected. To overcome these shortcomings, an indoor pseudolite hybrid fingerprint positioning method based on measured fingerprints, simulated fingerprints and an artificial neural network is proposed, combining the advantages of field measurement, mathematical simulation and neural networks. First, the actual environment and the signal transceivers are modeled. Second, simulated fingerprints generated by ray-tracing simulation are converted and added, together with the measured fingerprints, to the input of the neural network, which enriches the sample characteristics of the original data set built from measured fingerprints alone. Finally, the neural network positioning model is jointly trained on the mixed fingerprints and then used for online positioning. Taking an airport environment as an example, it is shown that the hybrid method improves the positioning accuracy in regions with sparsely collected fingerprints, with a root mean square error of 0.485 0 m, 54.7% lower than that of the traditional fingerprint positioning method. Preliminary positioning can also be completed in areas where no fingerprints are collected, with a root mean square positioning error of 1.123 7 m, which overcomes a fundamental limitation of traditional fingerprint positioning methods.
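
    A minimal sketch of the hybrid-database idea, assuming RSS-style fingerprints and a weighted k-nearest-neighbour matcher in place of the paper's neural network: measured and ray-tracing-simulated fingerprints are simply stacked into one database before positioning.

```python
import numpy as np

def knn_locate(db_fp, db_pos, query_fp, k=3):
    """Weighted k-NN position estimate from a fingerprint database.

    db_fp : (N, M) signal fingerprints (measured + simulated)
    db_pos: (N, 2) reference positions
    """
    d = np.linalg.norm(db_fp - query_fp, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return (db_pos[idx] * w[:, None]).sum(axis=0) / w.sum()

measured_fp  = np.array([[-60.0, -72.0], [-65.0, -70.0]])
measured_pos = np.array([[0.0, 0.0], [2.0, 0.0]])
simulated_fp  = np.array([[-62.0, -71.0]])   # from ray tracing, after conversion
simulated_pos = np.array([[1.0, 0.0]])

db_fp  = np.vstack([measured_fp, simulated_fp])    # hybrid database
db_pos = np.vstack([measured_pos, simulated_pos])
print(knn_locate(db_fp, db_pos, np.array([-61.0, -71.5])))
```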

    Analysis of the spatial coverage area of linear distributed directional array beamforming
    DUAN Baiyu, YANG Jian, CHEN Cong, GUO Wenbo, LI Tong, SHAO Shihai
    Journal of Xidian University. 2023, 50(5):  32-43.  doi:10.19665/j.issn1001-2400.20230103

    Phased array antennas are widely used in radar, communications and other fields because of their high gain, high reliability and beam steerability. Because of limitations on size, deployment terrain and power consumption, a single phased array antenna can hardly meet the requirements of some complex scenes, particularly space-to-earth communication, reconnaissance and jamming, so multiple phased array antennas must be deployed in a distributed manner for cooperative beamforming to obtain a higher power gain than a single array. A distributed directional array uses multiple distributed array nodes to realize a virtual antenna array that sends or receives the same signal, with the phase of each element adjusted to form a directional beam. Aiming at the problem of calculating the gain coverage area of the distributed directional array beam on a plane at a given height, a calculation method is proposed based on the principles of array synthesis and spatial analytic geometry. Analysis and simulation results show that the gain coverage area of the linear distributed directional array beam, including the main-lobe and grating-lobe coverage areas, is strongly correlated with the elevation angle of the distributed array, the height of the target plane, the signal carrier frequency and the number of distributed nodes, and only weakly correlated with the spacing between the distributed nodes. The analytical values produced by the proposed method agree with computer simulation, providing a theoretical reference for the engineering implementation of long-distance high-power distributed arrays.
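
    A minimal numpy sketch of the underlying array-factor calculation, assuming a uniform linear array with illustrative carrier frequency, node spacing and steering angle: the normalized gain is swept over elevation and mapped to ground range on a plane at height h to delimit the covered area (grating lobes appear because the assumed node spacing far exceeds half a wavelength).

```python
import numpy as np

c = 3e8
f = 3e9                      # illustrative carrier frequency
lam = c / f
N, d = 8, 5 * lam            # number of nodes and inter-node spacing (assumed)
theta0 = np.deg2rad(30.0)    # steering (elevation) angle

def array_gain(theta):
    """Normalized power gain |AF|^2 of an N-element uniform linear array."""
    psi = 2 * np.pi * d / lam * (np.sin(theta) - np.sin(theta0))
    af = np.sum(np.exp(1j * psi * np.arange(N)))
    return np.abs(af) ** 2 / N ** 2

# sweep elevation angles, keep those above a coverage threshold (here -3 dB),
# and convert each to ground range on a plane at height h
h, thr = 1000.0, 0.5
angles = np.deg2rad(np.linspace(1.0, 89.0, 2000))
covered = [h / np.tan(t) for t in angles if array_gain(t) >= thr]
print(f"covered ground ranges: {min(covered):.1f} m to {max(covered):.1f} m")
```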

    Real-time power scheduling optimization strategy for 5G base stations considering energy sharing
    LIU Didi, YANG Yuhui, XIAO Jiawen, YANG Yifei, CHENG Pengpeng, ZHANG Quanjing
    Journal of Xidian University. 2023, 50(5):  44-53.  doi:10.19665/j.issn1001-2400.20230101

    To alleviate the pressure on the power supply caused by the huge energy consumption of 5th generation mobile communication (5G) base stations, a joint model of distributed renewables, energy sharing and energy storage is proposed with the objective of minimizing the long-term power purchase cost of network operators. A low-complexity real-time energy sharing scheduling algorithm based on Lyapunov optimization theory is proposed, which requires no a priori statistical information on renewable energy output, energy demand or the time-varying tariffs of the smart grid. A virtual queue is constructed for the flexible electricity demand of the base stations, and the time-coupling constraint of energy storage in the scheduling problem is transformed into a virtual-queue stability problem. The proposed algorithm schedules the renewable energy output, energy storage, energy use and energy sharing of the base stations in real time, and minimizes the long-term cost of purchasing power from the external grid while meeting the electricity demand of each base station. Theoretical analysis shows that the algorithm only needs to make real-time decisions based on the current system state and that the optimization result can be made arbitrarily close to the optimal value. Finally, simulation results show that the proposed algorithm reduces the power purchase cost of the network operator by 43.1% compared with a baseline greedy algorithm.
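
    A toy drift-plus-penalty sketch, assuming a single base station with one virtual queue for flexible demand and illustrative parameters: each slot, free renewables are used first and power is bought from the grid only when the queue backlog outweighs the tariff weighted by V, after which the virtual queue is updated. This sketches the Lyapunov mechanism only, not the paper's scheduler.

```python
import numpy as np

def per_slot_decision(Q, price, renewable, demand, V=10.0, g_max=5.0):
    """One drift-plus-penalty step for a single base station (toy model).

    Q        : virtual queue backlog of flexible demand not yet served
    price    : current grid tariff
    renewable: free local energy available this slot
    demand   : flexible demand arriving this slot
    The slot decision trades off V*price*grid_buy against Q*served, i.e.
    it buys from the grid only when backlog pressure dominates the price.
    """
    served = min(Q + demand, renewable)            # use free renewables first
    grid_buy = 0.0
    if Q + demand - served > 0 and Q > V * price:  # backlog pressure dominates
        grid_buy = min(Q + demand - served, g_max)
    served += grid_buy
    Q_next = max(Q + demand - served, 0.0)         # virtual queue update
    return Q_next, price * grid_buy

Q, total = 0.0, 0.0
rng = np.random.default_rng(0)
for t in range(24):
    Q, cost = per_slot_decision(Q, price=rng.uniform(0.2, 1.0),
                                renewable=rng.uniform(0.0, 3.0),
                                demand=rng.uniform(0.0, 2.0))
    total += cost
print(f"backlog={Q:.2f}, purchase cost={total:.2f}")
```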

    Anti-occlusion PMBM tracking algorithm optimized by fuzzy inference
    LI Cuiyun, HENG Bowen, XIE Jinchi
    Journal of Xidian University. 2023, 50(5):  54-64.  doi:10.19665/j.issn1001-2400.20230401

    Target occlusion is a common problem in multiple extended target tracking. When targets are close to each other or unknown obstacles lie within the scanning range of the sensor, targets may be partially or completely occluded, resulting in an underestimate of the number of targets. Aiming at the inability of existing Poisson multi-Bernoulli mixture (PMBM) filtering algorithms to track stably in occlusion scenarios, this paper proposes a GP-PMBM algorithm incorporating fuzzy inference. First, within the random-finite-set target tracking framework, extended target occlusion models are given for different occlusion scenarios. On this basis, the state space of the GP-PMBM filter is expanded, and the influence of occlusion on the target state is taken into account in the filtering steps by introducing a variable detection probability. Finally, a fuzzy inference system that estimates the target occlusion probability is constructed and combined with the GP-PMBM algorithm; accurate estimation of targets in occlusion scenarios is achieved by exploiting the descriptive ability of the fuzzy system and the good tracking performance of the PMBM filter. Simulation results show that the tracking performance of the proposed algorithm in target occlusion scenarios is better than that of existing PMBM filtering algorithms.
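
    A minimal sketch of a fuzzy inference system for the occlusion probability, assuming two illustrative inputs (normalized target separation and extent overlap), triangular memberships and a four-rule weighted-average defuzzification; the paper's rule base and inputs may differ.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def occlusion_probability(separation, overlap):
    """Fuzzy estimate of occlusion probability (illustrative rule base).

    separation: distance between two targets, normalized to [0, 1]
    overlap   : overlap ratio of their predicted extents, in [0, 1]
    """
    sep_small = tri(separation, -0.5, 0.0, 0.5)
    sep_large = tri(separation, 0.3, 1.0, 1.5)
    ovl_high = tri(overlap, 0.4, 1.0, 1.6)
    ovl_low = tri(overlap, -0.6, 0.0, 0.6)
    # rule firing strengths -> output levels (high=0.9, medium=0.5, low=0.1)
    rules = [(min(sep_small, ovl_high), 0.9),
             (min(sep_small, ovl_low),  0.5),
             (min(sep_large, ovl_high), 0.5),
             (min(sep_large, ovl_low),  0.1)]
    num = sum(w * y for w, y in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(occlusion_probability(separation=0.1, overlap=0.8))  # close and overlapping
```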

    Traffic flow prediction method for integrating longitudinal and horizontal spatiotemporal characteristics
    HOU Yue, ZHENG Xin, HAN Chengyan
    Journal of Xidian University. 2023, 50(5):  65-74.  doi:10.19665/j.issn1001-2400.20221101

    Aiming at the insufficient mining of the time-delay and spatial flow characteristics of upstream and downstream traffic in existing urban road traffic flow prediction research, as well as the insufficient consideration of the spatiotemporal characteristics of lane-level traffic flow, a traffic flow prediction method integrating longitudinal and horizontal spatiotemporal characteristics is proposed. First, the method quantifies and removes the spatial time lag between upstream and downstream traffic flows by calculating the delay time, which strengthens the spatiotemporal correlation of the upstream and downstream traffic flow sequences. Then, the lag-compensated traffic flow is fed into a bidirectional long short-term memory network through a vector-split data input scheme to capture the bidirectional longitudinal spatiotemporal relationship of transmission and backtracking between upstream and downstream traffic flows. At the same time, a multiscale convolution group is used to mine the multi-time-step horizontal spatiotemporal relationship between the traffic flows of the lanes in the section to be predicted. Finally, an attention mechanism dynamically fuses the longitudinal and horizontal spatiotemporal characteristics to obtain the predicted value. Experimental results show that in the single-step prediction experiment the MAE and RMSE of the proposed method decrease by 15.26% and 13.83% respectively, with a further 1.25% improvement over the conventional time series prediction model. Medium- and long-term multi-step prediction experiments further show that the proposed method effectively mines the fine-grained spatiotemporal characteristics of longitudinal and horizontal traffic flow and exhibits good stability and generality.
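
    The delay-time quantification step can be approximated with a simple lag search, assuming the upstream and downstream flows are related by a shift: pick the lag that maximizes the correlation between the two series and trim the upstream series accordingly. This is an illustrative sketch, not the paper's estimator.

```python
import numpy as np

def estimate_delay(upstream, downstream, max_lag=10):
    """Pick the lag that maximizes correlation between the two flow series."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        a, b = upstream[:len(upstream) - lag], downstream[lag:]
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

rng = np.random.default_rng(1)
up = rng.uniform(20, 80, 200)
down = np.roll(up, 3) + rng.normal(0, 1, 200)   # downstream lags by 3 steps
lag = estimate_delay(up, down)
aligned_up = up[:len(up) - lag]                 # lag-compensated upstream flow
print("estimated delay:", lag)
```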

    Nuclear segmentation method for thyroid carcinoma pathologic images based on boundary weighting
    HAN Bing, GAO Lu, GAO Xinbo, CHEN Weiming
    Journal of Xidian University. 2023, 50(5):  75-86.  doi:10.19665/j.issn1001-2400.20230501

    Thyroid cancer is one of the fastest-growing malignancies among all solid cancers. Pathological diagnosis is the gold standard for tumor diagnosis, and nuclear segmentation is a key step in the automatic analysis of pathological images. Aiming at the poor performance of existing segmentation methods on nuclear boundaries in thyroid carcinoma pathological images, we propose an improved boundary-weighted U-Net for nuclear segmentation. The designed boundary weighting module makes the segmentation network pay more attention to nuclear boundaries. At the same time, to keep the network from focusing so heavily on boundaries that it ignores the main body of the nucleus and fails on lightly stained nuclei, we design a segmentation network that enhances the foreground area and suppresses the background area in the upsampling stage. In addition, we build a dataset for nuclear segmentation of thyroid carcinoma pathological images named the VIP-TCHis-Seg dataset. Our method achieves a Dice coefficient (Dice) of 85.26% and a pixel accuracy (PA) of 95.89% on the self-built TCHis-Seg dataset, and a Dice of 81.03% and a PA of 94.63% on the public MoNuSeg dataset. Experimental results show that, compared with other methods, our method achieves the best performance on both Dice and PA and effectively improves the segmentation accuracy of the network at nuclear boundaries.
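
    A minimal sketch of one way to weight nuclear boundaries, assuming the boundary is taken as the morphological gradient of the ground-truth mask and used to up-weight a pixel-wise binary cross-entropy; the paper's module is built into the network rather than the loss, so this only illustrates the boundary-weighting idea.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_weight_map(mask, boundary_weight=5.0):
    """Give pixels on the nucleus boundary a larger loss weight."""
    boundary = binary_dilation(mask) ^ binary_erosion(mask)   # morphological edge
    weights = np.ones_like(mask, dtype=float)
    weights[boundary] = boundary_weight
    return weights

def weighted_bce(prob, mask, weights, eps=1e-7):
    """Pixel-wise weighted binary cross-entropy."""
    prob = np.clip(prob, eps, 1 - eps)
    loss = -(mask * np.log(prob) + (1 - mask) * np.log(1 - prob))
    return float((weights * loss).mean())

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                 # a toy "nucleus"
prob = np.full((8, 8), 0.1)
prob[2:6, 2:6] = 0.8                  # an imperfect prediction
w = boundary_weight_map(mask)
print(weighted_bce(prob, mask.astype(float), w))
```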

    Cloth-changing person re-identification paradigm based on domain augmentation and adaptation
    ZHANG Peixu, HU Guanyu, YANG Xinyu
    Journal of Xidian University. 2023, 50(5):  87-94.  doi:10.19665/j.issn1001-2400.20221106

    To reduce the influence of clothing changes on a model's ability to recognize personal identity, a cloth-changing person re-identification paradigm based on domain augmentation and adaptation is proposed, which enables the model to learn general, robust identity representation features across different domains. First, a clothing semantic-aware domain data augmentation method is designed based on the semantic information of the human body; it changes the color of the clothes in a sample without changing the identity of the target person, compensating for the lack of domain diversity in the data. Second, a multi-positive-class domain adaptive loss function is designed, which assigns differential weights to the losses of multi-positive-class data according to the contributions made by data from different domains during training, forcing the model to focus on learning generic identity features. Experiments demonstrate that the method achieves Rank-1/mAP of 59.5%/60.0% and 88.0%/84.5% on the two cloth-changing datasets PRCC and CCVID, respectively, without affecting the accuracy of re-identification when clothes are unchanged. Compared with other methods, the proposed method has higher accuracy and stronger robustness, and significantly improves the model's ability to recognize persons.

    Point set registration optimization algorithm using spatial clustering and structural features
    HU Xin, XIANG Diyuan, QIN Hao, XIAO Jian
    Journal of Xidian University. 2023, 50(5):  95-106.  doi:10.19665/j.issn1001-2400.20230411

    Noise, non-rigid deformation and mismatches in point set registration make it difficult to solve for the nonlinear optimal spatial transformation. This paper introduces local constraints and proposes a point set registration optimization algorithm using spatial distance clustering and local structural features (PR-SDCLS). First, a motion-consistency cluster subset and an outlier cluster subset are constructed from the point-set spatial distance matrix. Then, a Gaussian mixture model is used to fit the motion-consistency cluster subset, and a mixing coefficient that accounts for both global and local features is obtained by fusing the shape context feature descriptor with weighted spatial distances. Finally, the expectation-maximization algorithm is used to estimate the parameters, yielding a non-rigid point set registration model based on the Gaussian mixture model. To improve efficiency, the transformation is modeled in a reproducing kernel Hilbert space and a kernel approximation strategy is adopted. Experimental results show that the algorithm achieves good registration accuracy and robustness on non-rigid datasets with various types of data degradation (deformation, noise, outliers, occlusion and rotation), even with a large number of outliers; the mean of the average registration error is reduced by 42.053 8% relative to classic and state-of-the-art algorithms.
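
    A minimal sketch of an EM E-step in which the responsibilities blend a spatial Gaussian affinity with a local feature-similarity affinity and reserve a uniform term for outliers; the blending weight and one-dimensional feature here are illustrative stand-ins for the shape-context-based mixing coefficient of PR-SDCLS.

```python
import numpy as np

def e_step(X, Y, feat_X, feat_Y, sigma2=1.0, alpha=0.5, w_outlier=0.1):
    """Responsibilities of model points Y for data points X.

    Spatial Gaussian affinity is blended with a feature-similarity affinity
    (weight alpha); a uniform term absorbs outliers.
    """
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)        # (N, M)
    spatial = np.exp(-d2 / (2 * sigma2))
    f2 = ((feat_X[:, None, :] - feat_Y[None, :, :]) ** 2).sum(-1)
    feature = np.exp(-f2 / 2)
    affinity = (1 - alpha) * spatial + alpha * feature
    denom = affinity.sum(axis=1, keepdims=True) + w_outlier
    return affinity / denom                                    # soft assignments

X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = np.array([[0.1, 0.0], [1.2, 0.9]])
fX = np.array([[1.0], [0.0]])
fY = np.array([[0.9], [0.1]])
print(e_step(X, Y, fX, fY).round(3))
```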

    Cyberspace Security
    Cause-effect graph enhanced APT attack detection algorithm
    ZHU Guangming, LU Zijie, FENG Jiawei, ZHANG Xiangdong, ZHANG Fengjun, NIU Zuoyuan, ZHANG Liang
    Journal of Xidian University. 2023, 50(5):  107-117.  doi:10.19665/j.issn1001-2400.20221105

    With the development of information technology, cyberspace has given rise to an increasing number of security risks and threats. Advanced cyberattacks are becoming more common, and the Advanced Persistent Threat (APT) attack is one of the most sophisticated and most widely adopted by modern attackers. Traditional statistical or machine learning detection methods based on network flows struggle to cope with complicated and persistent APT-style attacks. To address the difficulty of detecting APT attacks, a cause-effect graph enhanced APT attack detection algorithm is proposed to model the interaction process between network nodes at different times and to identify malicious packets of the attack process in network flows. First, the cause-effect graph is used to model network packet sequences, and the data flows between IP nodes in the network are associated to establish context sequences of attack and non-attack behaviors. Then, the sequence data are normalized, and a deep learning model based on the long short-term memory network (LSTM) is used for sequence classification. Finally, based on the sequence classification results, the original packets are screened for malicious content. A new dataset is constructed based on the DAPT 2020 dataset, and the proposed algorithm reaches an ROC-AUC of 0.948 on the test set. Experimental results demonstrate that the attack detection algorithm based on cause-effect graph sequences has obvious advantages and is a feasible approach to detecting APT attack network flows.
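
    The context-sequence construction can be pictured with a short sketch, assuming a toy packet schema: packets are grouped by (source IP, destination IP) pair in time order, normalized, and padded to a fixed length before being handed to an LSTM-based classifier.

```python
from collections import defaultdict

def build_sequences(packets, max_len=16):
    """Group packets by (src, dst) IP pair into time-ordered feature sequences.

    packets: list of dicts with 'src', 'dst', 'time', 'size' (toy schema).
    Sizes are min-max normalized per sequence; sequences are truncated/padded.
    """
    flows = defaultdict(list)
    for p in sorted(packets, key=lambda p: p["time"]):
        flows[(p["src"], p["dst"])].append(float(p["size"]))
    sequences = {}
    for key, sizes in flows.items():
        lo, hi = min(sizes), max(sizes)
        norm = [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in sizes]
        norm = norm[:max_len] + [0.0] * max(0, max_len - len(norm))
        sequences[key] = norm          # ready for an LSTM-based classifier
    return sequences

packets = [
    {"src": "10.0.0.2", "dst": "10.0.0.9", "time": 1, "size": 60},
    {"src": "10.0.0.2", "dst": "10.0.0.9", "time": 2, "size": 1500},
    {"src": "10.0.0.3", "dst": "10.0.0.9", "time": 3, "size": 400},
]
print(build_sequences(packets))
```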

    Double adaptive image watermarking algorithm based on regional edge features
    GUO Na, HUANG Ying, NIU Baoning, LAN Fangpeng, NIU Zhixian, GAO Zhuojie
    Journal of Xidian University. 2023, 50(5):  118-131.  doi:10.19665/j.issn1001-2400.20221107

    Local image watermarking, which embeds the watermark in part of an image, is a popular technology that can resist cropping attacks. Existing local watermarking techniques locate the embedding region by feature points, which may drift when the image is attacked. Because pixels near edges differ markedly from their neighbors, if the region contains many edges this drift leads to large regional pixel errors and causes watermark extraction to fail. To solve this problem, a double adaptive image watermarking algorithm based on regional edge features is proposed. First, a method for determining the embedding region is proposed: a sliding window selects embedding regions with few edges and good hiding ability, taking image features such as edges and texture into account. Second, a double adaptive watermark embedding scheme is proposed in which the region is divided into blocks and each block embeds one bit of watermark information by modifying pixel values. In the first, coarse-grained adaptive stage, a function relating the embedding parameter to the number of edge pixels is established through linear regression analysis, and the embedding strength is adaptively adjusted by this function to enhance the robustness of blocks containing edges. In the second, fine-grained adaptive stage, a Gaussian window is used to adaptively adjust the modification of individual pixels to improve the imperceptibility of the watermark. Experiments show that the proposed algorithm effectively enhances the robustness of the watermark near edges and improves its imperceptibility.
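
    A minimal sketch of the two adaptive steps for one block, assuming a linear strength-versus-edge-count mapping (the paper fits this relation by regression) and a Gaussian window that tapers the per-pixel modification toward the block border.

```python
import numpy as np

def gaussian_window(n, sigma=0.4):
    """2-D Gaussian weights on an n x n block, peak 1 at the centre."""
    ax = np.linspace(-1.0, 1.0, n)
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

def embed_bit(block, bit, edge_count, a=0.05, b=1.0):
    """Embed one watermark bit by shifting block pixels up or down.

    The strength grows linearly with the number of edge pixels
    (a*edge_count + b is an assumed mapping, not the paper's regression),
    and the Gaussian window concentrates the change near the block centre.
    """
    strength = a * edge_count + b
    delta = strength * gaussian_window(block.shape[0])
    return np.clip(block + (delta if bit else -delta), 0, 255)

block = np.full((8, 8), 128.0)
marked = embed_bit(block, bit=1, edge_count=12)
print(marked.max() - block.max())   # peak modification at the block centre
```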

    Improved short-signature based cloud data audit scheme
    CUI Yuanyou, WANG Xu’an, LANG Xun, TU Zheng, SU Yunxuan
    Journal of Xidian University. 2023, 50(5):  132-141.  doi:10.19665/j.issn1001-2400.20230107

    With the development of the Internet of Things, cloud storage has grown explosively, and effective verification of the integrity of data stored with cloud storage service providers (CSPs) has become an important issue. To address the inefficiency of existing data integrity audit schemes based on the BLS short signature, ZHU et al. designed a data integrity audit scheme based on the ZSS short signature in 2019. However, this paper points out that the proof generated in the challenge phase of ZHU et al.'s scheme is incorrect and can be subjected to replay attacks, or attacked by exploiting the bilinear map, so as to pass the audit of the third party auditor (TPA). This paper then proposes an improved short-signature-based cloud audit scheme by revising the computation of the proof in the challenge stage and optimizing the equation used by the third party auditor to verify the proof in the verification stage. The correctness of the improved scheme is proved, the shortcomings of the original scheme are remedied, and the security of the scheme is analyzed. The improved scheme not only prevents attackers, including the third party auditor, from recovering users' data, but also resists replay attacks and forgery attacks by attackers, including malicious cloud storage service providers. Numerical analysis shows that the computational cost changes little while the communication cost decreases, giving the improved scheme better overall performance than the original.

    Verifiable traceable electronic license sharing deposit scheme
    WANG Lindong, TIAN Youliang, YANG Kedi, XIAO Man, XIONG Jinbo
    Journal of Xidian University. 2023, 50(5):  142-155.  doi:10.19665/j.issn1001-2400.20230408

    Verifiability and traceability are important challenges for the sharing and depositing of electronic licenses. Traditional methods only guarantee the verifiability of the issuer through electronic signatures; the verifiability of the holder and the depositor, and the traceability of license leakage, are difficult to ensure. Therefore, a verifiable and traceable electronic license sharing and deposit scheme is proposed. First, aiming at the unauthorized use of electronic licenses and the inability to trace them after leakage, a model of the electronic license sharing and deposit system is constructed. Second, aiming at the loss of watermark information in traditional strongly robust watermarking algorithms, the existing strongly robust watermarking algorithm is improved with a BCH code so that distorted watermark information can be error-corrected. Finally, to achieve verifiability of the issuer, the holder and the depositor, as well as efficient tracing after the leakage of an electronic license, a verifiable and traceable electronic license model is constructed by combining the proposed robust watermark with reversible information hiding, and on this basis an electronic license sharing and deposit protocol is designed to ensure genuinely authorized use of the license and efficient tracing after leakage. Security and efficiency analysis shows that, while guaranteeing the verifiability of the three parties, the scheme achieves efficient tracing after license leakage, has good resistance to collusion attacks, and has a low enough execution time to meet the needs of practical applications.

    Anti-collusion attack image retrieval privacy protection scheme for ASPE
    CAI Ying, ZHANG Meng, LI Xin, ZHANG Yu, FAN Yanfang
    Journal of Xidian University. 2023, 50(5):  156-165.  doi:10.19665/j.issn1001-2400.20230406

    Existing algorithms based on Asymmetric Scalar-Product-Preserving Encryption (ASPE) achieve privacy-preserving image retrieval in cloud computing. However, because cloud service providers and retrieval users may be untrustworthy and external adversaries exist, such algorithms cannot resist collusion between malicious users and cloud servers, which may leak image data containing sensitive information. Aiming at multi-user scenarios, an anti-collusion image retrieval privacy protection scheme for ASPE is proposed. First, the scheme uses proxy re-encryption to solve the problem of image key leakage caused by transmitting private keys to untrusted users. Second, the leakage of the feature key through collusion between the cloud service provider and retrieval users is prevented by adding a diagonal matrix encryption step at the client side. Finally, linear discriminant analysis is used to compensate for the drop in retrieval accuracy caused by dimensionality reduction when locality-sensitive hashing is used to construct the index. The security analysis proves that the scheme is safe and effective: it resists collusion attacks between cloud service providers and untrusted users, ciphertext-only attacks, known-background attacks and known-plaintext attacks, and it protects both images and private keys throughout the process. Experimental results show that, while protecting image privacy and maintaining retrieval efficiency, the retrieval accuracy of the proposed scheme in the ciphertext domain differs from that in the plaintext domain by only about 2%.
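
    The scalar-product-preserving property that ASPE builds on can be checked with a few lines of numpy, assuming a random invertible key matrix M: database features are encrypted with M^T and queries with M^{-1}, so inner products, and hence similarity rankings, are preserved. This sketch shows only that core property, not the proxy re-encryption, diagonal-matrix or LSH/LDA components of the proposed scheme.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 4                                   # feature dimension (toy)
M = rng.normal(size=(d, d))             # secret invertible key matrix
M_inv = np.linalg.inv(M)

def encrypt_db(p):
    return M.T @ p                      # ciphertext of a database feature

def encrypt_query(q):
    return M_inv @ q                    # trapdoor for a query feature

p = rng.normal(size=d)
q = rng.normal(size=d)
# (M^T p) . (M^{-1} q) = p^T M M^{-1} q = p . q : the scalar product survives
print(np.allclose(encrypt_db(p) @ encrypt_query(q), p @ q))
```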

    Federated learning scheme for privacy-preserving of medical data
    WANG Bo, LI Hongtao, WANG Jie, GUO Yina
    Journal of Xidian University. 2023, 50(5):  166-177.  doi:10.19665/j.issn1001-2400.20230202

    As an emerging training paradigm for neural networks, federated learning has received widespread attention for its ability to train models while protecting user data privacy. However, since adversaries can track the shared gradients and derive participants' private data from them, federated learning is still exposed to various security and privacy threats. Aiming at the privacy leakage of medical data in federated learning, a secure and privacy-preserving federated learning architecture for medical data based on Paillier homomorphic encryption (HEFLPS) is proposed. First, the shared training models of the clients are encrypted with Paillier homomorphic encryption to ensure the security and privacy of the model updates, and a zero-knowledge-proof identity authentication module is designed to ensure the credibility of the training members. Second, a message confirmation mechanism on the server side temporarily excludes disconnected or unresponsive users, which shortens the server's waiting time and reduces the communication cost. Experimental results show that the proposed mechanism achieves high model accuracy, low communication delay and a degree of scalability while providing privacy protection.
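
    The additive homomorphism the architecture relies on can be demonstrated with a textbook toy Paillier implementation (insecure, tiny primes, integer-encoded updates): the server multiplies client ciphertexts and obtains the encryption of the summed updates without ever decrypting an individual one. This illustrates the primitive only, not the HEFLPS protocol.

```python
from math import gcd
import random

# --- toy Paillier key generation (insecure, tiny primes for illustration) ---
p, q = 4099, 4111
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)             # modular inverse of L(g^lam)

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# two clients encrypt integer-encoded gradient values; the server multiplies
# ciphertexts, which corresponds to adding the plaintexts
g1, g2 = 1234, 5678
aggregated = (enc(g1) * enc(g2)) % n2
print(dec(aggregated) == (g1 + g2) % n)          # True
```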

    Efficient federated learning privacy protection scheme
    SONG Cheng, CHENG Daochen, PENG Weiping
    Journal of Xidian University. 2023, 50(5):  178-187.  doi:10.19665/j.issn1001-2400.20230403

    Federated learning allows clients to train a model jointly by sharing only gradients, rather than handing the training data to the server. Although federated learning avoids exposing data directly to third parties and thus offers some protection, research shows that the gradients transmitted in federated learning can still leak private information, while the computation and communication overhead introduced by encryption during training hurts training efficiency and makes such schemes hard to apply in resource-constrained environments. Aiming at the security and efficiency problems of current privacy protection schemes for federated learning, a safe and efficient privacy protection scheme is proposed that combines homomorphic encryption with compression techniques. The homomorphic encryption algorithm is optimized to ensure the security of the scheme while reducing the number of operations and improving their efficiency. At the same time, a gradient filtering compression algorithm is designed to filter out local updates that are unrelated to the convergence trend of the global model, and the update parameters are quantized by a compression operator with negligible computational cost, which preserves model accuracy and increases communication efficiency. The security analysis shows that the scheme satisfies security properties such as indistinguishability, data privacy and model security. Experimental results show that the proposed scheme not only achieves higher model accuracy but also has clear advantages over existing schemes in communication and computation costs.
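
    A minimal sketch of the compression side, assuming a magnitude top-k filter in place of the paper's convergence-trend criterion: only the largest entries of a local update are kept and quantized to 8 bits before transmission, and the server restores a sparse approximation.

```python
import numpy as np

def compress_update(update, k_ratio=0.1):
    """Keep only the largest-magnitude entries, then 8-bit quantize them."""
    flat = update.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argsort(np.abs(flat))[-k:]            # indices of the kept gradients
    vals = flat[idx]
    scale = float(np.max(np.abs(vals))) / 127.0
    scale = scale if scale > 0 else 1.0
    q = np.round(vals / scale).astype(np.int8)     # 8-bit quantized values
    return idx, q, scale, update.shape

def decompress_update(idx, q, scale, shape):
    out = np.zeros(int(np.prod(shape)))
    out[idx] = q.astype(np.float64) * scale
    return out.reshape(shape)

rng = np.random.default_rng(7)
grad = rng.normal(0, 0.01, size=(4, 8))
packet = compress_update(grad)
approx = decompress_update(*packet)
print("kept entries:", packet[1].size,
      "max abs error:", float(np.max(np.abs(grad - approx))))
```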

    Privacy preserving multi-classification LR scheme for data quality
    CAO Laicheng, WU Wentao, FENG Tao, GUO Xian
    Journal of Xidian University. 2023, 50(5):  188-198.  doi:10.19665/j.issn1001-2400.20230601

    To protect the privacy of multi-classification logistic regression models in machine learning, ensure the quality of training data, and reduce computation and communication costs, a privacy-preserving multi-classification logistic regression scheme for data quality is proposed. First, based on homomorphic encryption for arithmetic of approximate numbers, batching and the single-instruction multiple-data mechanism are used to pack multiple messages into one ciphertext, and encrypted vectors can be securely shifted within the ciphertext corresponding to a plaintext vector. Second, the binary logistic regression model is extended to multiple classes by training multiple classifiers with the "One vs Rest" decomposition strategy. Finally, the training data set is divided into several fixed-size matrices that still retain the complete data structure of the sample information, and the fixed Hessian method is used to optimize the model parameters so that they are usable in all cases and remain private during model training. The scheme reduces data sparsity and ensures data quality. The security analysis shows that neither the training model nor user data can be leaked at any point in the process. Experiments show that the training accuracy of the scheme is greatly improved compared with existing schemes and is almost identical to that obtained by training on unencrypted data, and that the scheme has a lower computation cost.
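
    The plaintext counterpart of the training procedure can be sketched in a few lines: One-vs-Rest decomposition with a fixed Hessian (Boehning's bound X^T X/4) that is inverted once and reused at every iteration, which is what keeps the update friendly to homomorphic evaluation; the encrypted, packed version described in the paper is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train_ovr_fixed_hessian(X, y, n_classes, iters=50, ridge=1e-3):
    """One-vs-Rest logistic regression trained with a fixed Hessian.

    The true Hessian is replaced by the bound H = X^T X / 4 (plus a small
    ridge), computed and inverted once and reused at every iteration.
    """
    n, d = X.shape
    H = X.T @ X / 4.0 + ridge * np.eye(d)
    H_inv = np.linalg.inv(H)
    W = np.zeros((n_classes, d))
    for c in range(n_classes):
        t = (y == c).astype(float)              # one-vs-rest labels
        w = np.zeros(d)
        for _ in range(iters):
            grad = X.T @ (sigmoid(X @ w) - t)
            w -= H_inv @ grad                   # fixed-Hessian Newton step
        W[c] = w
    return W

def predict(W, X):
    return np.argmax(X @ W.T, axis=1)

rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([rng.normal(c, 1.0, size=(30, 2)) for c in centers])
X = np.hstack([X, np.ones((90, 1))])            # bias column
y = np.repeat([0, 1, 2], 30)
W = train_ovr_fixed_hessian(X, y, n_classes=3)
print("training accuracy:", float((predict(W, X) == y).mean()))
```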

    COLLATE: towards the integrity of control-related data
    DENG Yingchuan, ZHANG Tong, LIU Weijie, WANG Lina
    Journal of Xidian University. 2023, 50(5):  199-211.  doi:10.19665/j.issn1001-2400.20230106

    Programs written in C/C++ may contain bugs that can be exploited to subvert the control flow. Existing control-flow hijacking mitigations validate indirect control-flow transfer targets or guarantee the integrity of code pointers. However, attackers can still overwrite the dependencies of function pointers, bending indirect control-flow transfers (ICTs) to valid but unexpected targets. We introduce control-related data integrity (COLLATE) to guarantee the integrity of function pointers and their dependencies, where the dependencies determine the potential data flow between function pointer definitions and ICTs. COLLATE identifies function pointers and collects their dependencies with inter-procedural static taint analysis. Moreover, COLLATE allocates control-related data in a hardware-protected memory domain MS to prevent unauthorized modification. We evaluate the overhead of COLLATE on the SPEC CPU 2006 benchmarks and Nginx, and evaluate its effectiveness on three real-world exploits and one test suite for vtable pointer overwrites. The evaluation results show that COLLATE successfully detects all attacks and introduces a 10.2% performance overhead on average for the C/C++ benchmarks and 6.8% for Nginx, which is acceptable. The experiments prove that COLLATE is effective and practical.

    Random chunks attachment strategy based secure deduplication for cloud data
    LIN Genghao, ZHOU Ziji, TANG Xin, ZHOU Yiteng, ZHONG Yuqi, QI Tianyang
    Journal of Xidian University. 2023, 50(5):  212-228.  doi:10.19665/j.issn1001-2400.20230503

    Source-based deduplication prevents subsequent users from uploading the same file by returning a deterministic response, which greatly saves network bandwidth and storage. However, the deterministic response inevitably introduces a side channel: once a subsequent upload is deemed unnecessary, an attacker can easily learn whether a target file already exists in cloud storage. To resist such side channel attacks, various defenses have been proposed, such as adding trusted gateways, setting trigger thresholds and obfuscating response values, but these methods suffer from high deployment costs, high start-up costs and difficulty in resisting the random chunks generation attack and the learn-remaining-information attack. We therefore propose a novel secure deduplication scheme that uses a random chunks attachment strategy to obfuscate the response. Specifically, we first append a number of chunks whose existence status is unknown to the end of the request to blur the existence status of the originally requested chunks, and then reduce the probability that the response returns the lower boundary value through a scrambling strategy. Finally, the deduplication response is generated with the help of a newly designed response table. Security analysis and experimental results show that, compared with existing work, our scheme significantly improves security at the cost of only a small additional overhead.
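
    A minimal client-side sketch of the random chunks attachment idea, assuming SHA-256 chunk fingerprints: chunks of unknown existence status are appended to the request and the list is shuffled, so the server's deduplication reply no longer pins down the existence status of the chunks the client actually asked about. The scrambling strategy and response table of the paper are not modeled here.

```python
import hashlib
import os
import random

def chunk_fingerprints(data, chunk_size=4096):
    """Split data into fixed-size chunks and fingerprint each one."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def obfuscated_request(real_fps, n_dummy=4):
    """Attach random chunks of unknown existence status and shuffle.

    The server sees a mixed, shuffled fingerprint list, so a deterministic
    'duplicate / not duplicate' reply no longer reveals which of the
    originally requested chunks already exist.
    """
    dummies = [hashlib.sha256(os.urandom(4096)).hexdigest()
               for _ in range(n_dummy)]
    request = real_fps + dummies
    random.shuffle(request)
    return request, set(dummies)

real = chunk_fingerprints(os.urandom(3 * 4096))
req, dummy_set = obfuscated_request(real)
print(len(req), "fingerprints sent,", len(dummy_set), "are random attachments")
```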