
Table of Contents

    20 December 2024 Volume 51 Issue 6
      
    Information and Communications Engineering
    Non-cooperative multi-target distributed localization algorithm
    LI Wengang, WANG Qianxiong, HUANG Jun, CI Guohui, ZHAI Xiaotong
    Journal of Xidian University. 2024, 51(6):  1-9.  doi:10.19665/j.issn1001-2400.20241003

    With the vigorous development of mobile communication, the detection of long-distance targets in modern warfare pays increasing attention to concealment and security. In this context, research on non-cooperative multi-target localization from passively received signals has attracted wide attention. However, passive localization of multiple non-cooperative targets is strongly affected by the matching relationship between signal data, which influences the accuracy of target position estimation. To address the complexity and diversity of data matching relationships, this paper proposes a distributed data-matching localization algorithm for non-cooperative multi-target scenarios. The algorithm obtains angle and intensity measurements from signal calculations, constructs measurement data pairs, and selects candidate reference points to reduce the complexity of measurement data matching. It then constructs a cost function through probability analysis of measurement data pair matching, selects the measurement information of the minimum-cost point as the best match for each target, and calculates the position of each target by clustering analysis of the final proxy point sets. Simulation results show that the proposed algorithm increases the target matching success rate by over 26.3% and the computational speed by 16.8% in the same scenario, and achieves fast and accurate multi-target localization with an average positioning error of about 17.41 m in a region of 6 km×4 km.
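The geometric core of angle-based passive localization is intersecting bearing lines from two receiving stations. The sketch below is not the paper's matching algorithm; it only illustrates how a target position follows from two matched angle measurements, with station positions and bearings as hypothetical inputs.

```python
import math

def triangulate(s1, s2, theta1, theta2):
    """Intersect two bearing lines from stations s1 and s2.

    Bearings theta1, theta2 are in radians, measured from the +x axis.
    Solves s1 + t1*d1 = s2 + t2*d2 for the crossing point.
    """
    x1, y1 = s1
    x2, y2 = s2
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # 2x2 linear system [d1 | -d2] [t1, t2]^T = s2 - s1
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    bx, by = x2 - x1, y2 - y1
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])
```

In a multi-target setting, the hard part the paper addresses is deciding *which* bearings to pair before calling such a routine; a wrong pairing produces a ghost intersection.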

    Topology awareness based multi-service routing for the software-defined satellite network
    LU Xueyu, WEI Wenting, FU Liying, LIU Lizhe, WANG Dongdong
    Journal of Xidian University. 2024, 51(6):  10-24.  doi:10.19665/j.issn1001-2400.20240904

    The satellite network is key to building a wide-coverage, massively connected, omni-directional and highly reliable space-ground integrated information network. Aiming at the problems of complex satellite network structure and diversified service requirements, a multi-service routing algorithm for the software-defined satellite network based on topology awareness is proposed. To realize flexible and adaptive control, a software-defined satellite network architecture with satellite-ground coordination is constructed. In the two-layer centralized control plane, the high-orbit satellite centrally senses the topology information of the satellite network in the data plane and extracts node and link attributes to characterize the reliability of the inter-satellite links. On this basis, a topology-aware multi-service on-demand routing algorithm is designed by constructing a topology-aware model, characterizing dynamic link attributes, and building a multi-service on-demand routing model. For the three types of services, namely delay-, bandwidth-, and reliability-sensitive services, the K shortest paths are selected as the solution space. Link resources are reasonably allocated by combining path reliability awareness with the demand preferences of the services, realizing adaptation to the dynamic topology and multiple services. Simulation results show that the proposed routing algorithm effectively reduces the end-to-end delay and packet loss ratio, and significantly improves service satisfaction.
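The "K shortest paths as solution space" step can be sketched with a best-first enumeration of simple paths, where the per-link cost function is swapped according to service type. This is a minimal illustration, not the paper's routing model; the `weight` callable and the link attributes are assumed for the example.

```python
import heapq
from itertools import count

def k_best_paths(graph, src, dst, k, weight):
    """Return up to k cheapest simple paths from src to dst.

    graph: {u: {v: link_attrs}}; weight: link_attrs -> nonnegative cost.
    Best-first search pops paths in nondecreasing cost order, so the
    first k destinations popped are the k cheapest simple paths.
    """
    tie = count()  # tie-breaker so the heap never compares lists
    heap = [(0.0, next(tie), [src])]
    out = []
    while heap and len(out) < k:
        cost, _, path = heapq.heappop(heap)
        u = path[-1]
        if u == dst:
            out.append((cost, path))
            continue
        for v, attrs in graph[u].items():
            if v not in path:  # keep paths simple (loop-free)
                heapq.heappush(heap, (cost + weight(attrs), next(tie), path + [v]))
    return out
```

A delay-sensitive service might use `weight = lambda a: a['delay']`, while a reliability-sensitive one could penalize unreliable links, e.g. `lambda a: a['delay'] + 10 * (1 - a['reliability'])` (an assumed composite cost, not taken from the paper).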

    Dual attention pedestrian detector for occlusion scenario based on feature calibration
    TANG Shuyuan, ZHOU Yiqing, LI Jintao, LIU Chang, SHI Jinglin
    Journal of Xidian University. 2024, 51(6):  25-39.  doi:10.19665/j.issn1001-2400.20240909

    One of the major challenges faced by pedestrian detection technology based on computer vision is the issue of occlusion, including inter-class occlusion caused by objects in the natural environment and intra-class occlusion between pedestrians. These intertwined occlusion patterns limit the performance of pedestrian detectors. To address this problem, this paper proposes a dual-attention detection network based on feature calibration within the standard Faster R-CNN pedestrian detection framework. The network first generates attention masks through supervised learning to represent the spatial features of pedestrians in the image. These masks are then fused with backbone features and combined with a channel attention mechanism to calibrate pedestrian regions. This approach enhances the visibility of pedestrian regions while reducing the impact of occluded parts on classification and regression. Additionally, a non-uniform sampling strategy based on occlusion rates is introduced, targeting hard examples to allow the network to better learn complex occlusion patterns. Experimental results demonstrate that in comparison with standard pedestrian detectors, the proposed method achieves a 2.5% performance improvement on the reasonable occlusion subset of the CityPersons validation dataset.

    Research on peak-to-average power ratio reduction in sensing-integrated OFDM systems
    XIAN Yongju, ZHAO Runhao, XING Zhitong, LI Yun
    Journal of Xidian University. 2024, 51(6):  40-51.  doi:10.19665/j.issn1001-2400.20240914

    Orthogonal frequency division multiplexing (OFDM) is widely studied as a candidate waveform for future integrated sensing and communication signals. However, the high peak-to-average power ratio (PAPR) of OFDM signals not only degrades communication performance after high-power amplification, but also results in suboptimal sensing performance. In this paper, a companding transform is proposed to alter the probability density distribution of signal amplitudes, aiming to reduce the PAPR of integrated OFDM signals while maintaining excellent communication and sensing performance. Simulation analysis shows that, under roughly the same PAPR conditions with 64-QAM modulation, the integrated sensing and communication system containing the proposed PAPR reduction algorithm achieves SNR gains of 0.1 dB, 0.29 dB, 0.69 dB, and 1.1 dB over other algorithms at a bit error rate of 10⁻⁵, after passing through a solid-state power amplifier and an additive white Gaussian noise channel. In terms of sensing performance, the root mean square error (RMSE) of both velocity estimation and range estimation for the system incorporating the proposed companding algorithm is superior to that of the OFDM integrated sensing and communication system without it.
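The paper's specific companding law is not reproduced here, but the general idea can be sketched with a classical μ-law compander: compress the envelope of the time-domain OFDM signal so its peak-to-mean ratio drops, keeping the phase. The subcarrier count and μ value below are assumed for illustration only.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def mu_law_compand(x, mu=4.0):
    """mu-law compress the envelope; phase is preserved, peak is unchanged."""
    a = np.abs(x)
    peak = a.max()
    out = peak * np.log1p(mu * a / peak) / np.log1p(mu)
    return out * np.exp(1j * np.angle(x))

rng = np.random.default_rng(0)
# One 64-subcarrier OFDM symbol carrying random QPSK data
bits = rng.integers(0, 4, 64)
syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
x = np.fft.ifft(syms) * np.sqrt(64)
```

Because the companding curve is concave and fixes the peak while raising the average amplitude, the PAPR of the companded signal is strictly lower for any non-constant-envelope input; the receiver must apply the inverse expanding transform before demodulation.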

    Optimization method for Raptor-like multi-rate QC-LDPC codes
    LI Hua’an, WANG Wenzhen, XU Hengzhou, CHEN Chao, BAI Baoming
    Journal of Xidian University. 2024, 51(6):  52-59.  doi:10.19665/j.issn1001-2400.20240912

    The design of high-performance low-density parity-check (LDPC) codes for future communication networks can be reduced to designing LDPC codes with lower description complexity, such as multi-rate LDPC (MR-LDPC) codes with constant codeword length. By combining the structural properties of both 5G LDPC codes and MR codes, Raptor-like multi-rate quasi-cyclic LDPC (RL-MR-QC-LDPC) codes provide a promising scheme for the fused coding design of future terrestrial networks and other communication systems. Since their construction rests on algebraic theory, the design/storage complexity of algebraic RL-MR-QC-LDPC codes is very low; however, because the algebraically constructed matrix is highly structured, the performance improvement is limited. Therefore, this paper presents an optimization method for RL-MR-QC-LDPC codes using a splitting-combining strategy. Numerical results show that, in comparison with the original codes, the optimized codes obtain a better overall performance. The proposed method can be used directly to optimize RL-MR-QC-LDPC codes, and can also serve as a post-processing step to improve algebraic RL-MR-QC-LDPC codes, thereby supporting the fused coding design of future terrestrial networks and other communication systems.
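What makes QC-LDPC codes cheap to describe is that the parity-check matrix is stored as a small exponent (base) matrix of cyclic shifts and expanded on demand. The sketch below shows this standard lifting step only; the example base matrix and lifting size are arbitrary, and the paper's splitting-combining optimization operates on top of such exponent matrices.

```python
import numpy as np

def lift(base, Z):
    """Expand an exponent matrix into a binary QC-LDPC parity-check matrix.

    base[i][j] = -1 -> a Z x Z all-zero block;
    base[i][j] = s >= 0 -> the Z x Z identity cyclically shifted by s columns.
    """
    m, n = len(base), len(base[0])
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            s = base[i][j]
            if s >= 0:
                H[i * Z:(i + 1) * Z, j * Z:(j + 1) * Z] = np.roll(I, s, axis=1)
    return H
```

Only the small integer matrix `base` and `Z` need to be stored, which is the low description complexity the abstract refers to.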

    Multi-objective optimized WSN coverage algorithm integrating VFA and ISSA
    YU Xiuwu, JIN Shiqi
    Journal of Xidian University. 2024, 51(6):  60-72.  doi:10.19665/j.issn1001-2400.20240903

    An improved sparrow search algorithm guided by virtual force is put forward, aiming to address the issues of a low coverage rate, large coverage redundancy, and long node moving distance when monitoring the target region of wireless sensor networks. To begin with, the population is initialized using the Tent chaotic map in order to improve its diversity. Second, a virtual force algorithm is introduced to direct the discoverers of the sparrow population to search for better positions; that is, interaction forces between nodes, boundaries, and obstacles guide discoverers toward more beneficial positions and help the algorithm explore more widely. To keep the algorithm out of local optima, the Lévy flight disturbance approach is then used to optimize the followers' positions. Ultimately, random reverse learning is utilized to improve the position of the global optimal individual, allowing local optimization to occur in the surrounding area and improving the algorithm's population diversity and convergence speed. Experimental findings demonstrate that the proposed algorithm enhances the coverage rate while reducing the nodes' moving distance and achieving a more uniform distribution when compared with other traditional algorithms. Furthermore, in obstacle-filled monitoring areas the proposed algorithm combines the effective obstacle avoidance of the virtual force algorithm with the strong optimization-seeking capability of the sparrow search algorithm, ensuring that nodes avoid obstacles while the network deploys node positions reasonably. The approach is hence more useful for real-world applications.
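The Tent-chaotic-map initialization mentioned above can be sketched directly: iterate the tent map and scale each value onto the search bounds. The seed `x0` and slope parameter `a` below are assumed values (taking `a` slightly off 0.5 avoids landing on a short periodic orbit); the paper's exact parameterization may differ.

```python
def tent_map_population(pop_size, dim, lo, hi, x0=0.37, a=0.499):
    """Initialize a pop_size x dim population on [lo, hi] with the tent map.

    Tent map: x' = x/a if x < a else (1 - x)/(1 - a); iterates stay in [0, 1]
    and spread quasi-uniformly, giving a more diverse initial population
    than independent uniform draws in low-discrepancy terms.
    """
    pop = []
    x = x0
    for _ in range(pop_size):
        row = []
        for _ in range(dim):
            x = x / a if x < a else (1 - x) / (1 - a)
            row.append(lo + x * (hi - lo))  # scale chaotic iterate to bounds
        pop.append(row)
    return pop
```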

    Fusion classification network for hyperspectral and LiDAR feature coupling modeling
    XU Haitao, LIU Yuzhe, YAN Xinyi, LI Jiaojiao, XUE Changbin
    Journal of Xidian University. 2024, 51(6):  73-83.  doi:10.19665/j.issn1001-2400.20240702

    Hyperspectral image data are rich in spectral information, while LiDAR data provide detailed elevation information. In remote sensing processing, the two are usually fused to improve interpretation accuracy. However, when performing information interaction between different data sources, existing methods do not fully exploit the advantages of multi-source data fusion. Therefore, this paper presents a dual-branch fusion classification algorithm combining convolutional networks and Transformers, leveraging the advantages of both types of data. In the feature extraction stage, a cross-modal feature coupling module is designed, which improves the consistency of feature extraction by the backbone network through channel feature interaction and spatial feature interaction, enhancing the complementarity between data and the semantic relevance of features. In the feature fusion stage, a bilateral attention feature fusion module is designed which adopts a cross-attention mechanism to perform bidirectional feature fusion on hyperspectral and LiDAR data, enhancing complementarity between data, reducing redundant information, and ensuring that the optimized features can be efficiently fused and input into the classifier, thereby significantly improving the accuracy and robustness of the classification network. Experimental results show that, compared with existing fusion classification algorithms, the proposed algorithm achieves better results on the Houston2013 and Trento datasets, with average classification accuracy improvements of 1.43% and 4.81% respectively, indicating that it can significantly enhance the network's ability to distinguish spatial and spectral features of hyperspectral images and improve the fusion classification accuracy.
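One direction of the bilateral cross-attention described above (hyperspectral tokens querying LiDAR tokens) can be sketched in NumPy. The random projections, dimensions, and single-head form are illustrative assumptions, not the paper's trained module; the other direction simply swaps the two modalities.

```python
import numpy as np

def cross_attention(q_feats, kv_feats, d=16, seed=0):
    """Single-head cross-attention: rows of q_feats attend over kv_feats.

    q_feats: (Nq, Cq) tokens of one modality (e.g. hyperspectral).
    kv_feats: (Nk, Ck) tokens of the other modality (e.g. LiDAR).
    Returns (Nq, d) fused features.
    """
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((q_feats.shape[1], d))   # stand-in for learned weights
    Wk = rng.standard_normal((kv_feats.shape[1], d))
    Wv = rng.standard_normal((kv_feats.shape[1], d))
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(d)                     # scaled dot-product
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)           # softmax over LiDAR tokens
    return attn @ V
```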

    Construction of two classes of multi-permutation array codes
    WANG Dan, SUN Rong, HAN Hui
    Journal of Xidian University. 2024, 51(6):  84-90.  doi:10.19665/j.issn1001-2400.20240802

    Flash memory, with its high storage density and good reliability, is a dominant nonvolatile memory technology. Noise sources such as read/write interference, inter-cell interference, and charge leakage may cause deletion errors that prevent flash memory data from being read correctly. Error-correcting coding techniques such as permutation codes and multi-permutation codes under the rank-modulation scheme can overcome deletion/insertion errors; moreover, multi-permutation codes achieve a higher information rate than permutation codes. In recent years, only the construction and decoding of binary codes correcting single or multiple criss-cross insertion/deletion errors have been investigated; there are no studies of multi-permutation array codes correcting criss-cross insertion/deletion errors. To fill this gap, by using the interleaving technique of permutations and the idea of repetition codes, two constructions of balanced multi-permutation array codes capable of correcting a single criss-cross deletion error are proposed, with their decoding methods included in the proofs. The first construction is based on the parity interleaving of permutations to determine the position of the deleted column in the codeword array, while the second is based on the Levenshtein permutation code for the same purpose. Moreover, the balanced multi-permutation array codes obtained by the second construction achieve a higher rate than those of the first. The correctness of the constructions and their decoding methods is verified with examples.

    PINN-based method for solving DC operating points in nonlinear circuits
    CAI Gushun, LIU Jinhui, ZHANG Xindan, HUANG Zhao, WANG Quan
    Journal of Xidian University. 2024, 51(6):  91-103.  doi:10.19665/j.issn1001-2400.20241110

    The physics-informed neural network (PINN) is a new type of deep learning model, yet the high-order nonlinear equations arising in circuit DC analysis remain difficult to solve effectively. To address this problem, this paper proposes a novel PINN-based learning simulation model to achieve efficient simulation analysis and accurate solution of DC operating points in nonlinear circuits. Specifically, the nonlinear device I-V characteristic equations and the modified nodal analysis (MNA) equations are jointly exploited as a regularization term of the loss function, and the node admittance matrix and independent source values are substituted directly into the PINN as prior knowledge for training to obtain the final DC operating point learning simulation model, thereby effectively predicting node voltage values and completing the nonlinear solution for different device models. To validate the proposed PINN learning model, we conduct experiments on three typical nonlinear devices. The simulation results show that the maximum relative error of the proposed PINN learning model is less than 4.30% compared with the theoretical values, effectively addressing the difficulty that traditional numerical algorithms have in converging when solving the DC operating points of nonlinear circuits. Compared with the Gmin-stepping and source-stepping methods, the average prediction accuracy of the proposed PINN model increases by 0.11% and 0.23%, respectively. This illustrates that our method has a higher learning efficiency and good stability while requiring fewer samples.
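The residual that the PINN's loss penalizes is just the KCL/MNA equation combined with the device I-V law. As a point of reference, the sketch below solves that same residual for a single-diode circuit with a classical Newton iteration (the conventional baseline, not the paper's PINN); the source, resistor, and Shockley diode parameters are assumed example values.

```python
import math

# Series circuit: voltage source Vs -> resistor R -> node v -> diode to ground.
Vs, R = 5.0, 1000.0
Is, Vt = 1e-12, 0.02585  # assumed Shockley diode saturation current / thermal voltage

def residual(v):
    """KCL at the node: resistor current in minus diode current out.

    This is exactly the kind of term a PINN would add to its loss as a
    physics regularizer; here we drive it to zero directly with Newton.
    """
    return (Vs - v) / R - Is * (math.exp(v / Vt) - 1.0)

def dresidual(v):
    return -1.0 / R - (Is / Vt) * math.exp(v / Vt)

v = 0.6  # initial guess near the diode knee
for _ in range(50):
    v -= residual(v) / dresidual(v)  # Newton-Raphson update
```

The exponential device law is what makes plain Newton fragile for stiffer circuits, motivating Gmin stepping, source stepping, and the learning-based approach of the paper.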

    Computer Science and Technology & Cyberspace Security
    Cluster-oriented semi-online task scheduling method in the edge computing platform
    HAN Jiaxi, ZHAO Hui, FENG Nanzhi, WANG Jing, WAN Bo, WANG Quan
    Journal of Xidian University. 2024, 51(6):  104-116.  doi:10.19665/j.issn1001-2400.20241004

    Existing task scheduling methods for edge computing do not consider the uncertain performance of edge nodes caused by network delay, and cannot adapt to delay-sensitive edge computing platforms with uncertain node performance. To solve this problem, this paper proposes a cluster-oriented semi-online scheduling method for delay-sensitive edge computing platforms. First, considering nodes whose performance is uncertain due to network delay, a performance uncertainty metric is designed to represent the degree of performance certainty of edge nodes; this metric provides additional information for the semi-online scheduling algorithm. Second, a QoS guarantee model and a task completion time optimization model are combined into a dual-objective optimization model for task scheduling, which focuses on guaranteeing QoS and minimizing makespan. Third, to address the NP-hardness of task scheduling, a mapping-based semi-online task scheduling algorithm (MSSA) is proposed which divides the service area based on the performance uncertainty metric and user locations, establishes a cluster-oriented edge computing platform model, and dynamically adjusts cluster capacity based on load changes, thus enabling efficient semi-online task scheduling. Finally, using trace data from a real edge computing platform, simulation experiments are conducted to compare the proposed algorithm with other methods. Experimental results demonstrate that the proposed algorithm reduces the task completion time by 26% and improves the QoS guarantee by 19%.
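One way to see how an uncertainty metric feeds a semi-online scheduler is a greedy placement that inflates each node's estimated finish time by its uncertainty, so unreliable nodes are penalized. This is a toy sketch under assumed node attributes (`speed`, `uncertainty`), not the MSSA algorithm itself.

```python
def schedule(tasks, nodes):
    """Greedy semi-online placement with uncertainty-inflated finish times.

    tasks: list of workloads (arbitrary units), processed in arrival order.
    nodes: list of dicts with 'speed' (>0) and 'uncertainty' in [0, 1).
    Each task goes to the node minimizing (load + w)/speed * (1 + uncertainty).
    Returns the placement plan and the resulting (inflated) makespan.
    """
    loads = [0.0] * len(nodes)
    plan = []
    for w in tasks:
        costs = [(loads[i] + w) / n['speed'] * (1.0 + n['uncertainty'])
                 for i, n in enumerate(nodes)]
        i = min(range(len(nodes)), key=costs.__getitem__)
        loads[i] += w
        plan.append(i)
    makespan = max(loads[i] / nodes[i]['speed'] * (1.0 + nodes[i]['uncertainty'])
                   for i in range(len(nodes)))
    return plan, makespan
```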

    Overview of deep sentence-level entity relation extraction
    ZHAO Congjian, JIAO Yiyuan, LI Yanni
    Journal of Xidian University. 2024, 51(6):  117-131.  doi:10.19665/j.issn1001-2400.20240311

    Sentence-level entity relation extraction (RE) refers to extracting the semantic relationship between an entity pair from a given sentence. It is an important basis for knowledge graph construction, natural language processing (NLP), intelligent question answering, Web search, and other applications in artificial intelligence (AI), and it is a cutting-edge basic research topic in AI. With the successful application of deep neural networks (DNNs), a variety of DNN-based RE algorithms have emerged. In recent years, driven by the need to process and understand text continuously, deep continual relation extraction (CRE) algorithms combining entity relation extraction with continual learning (CL) have appeared. Such algorithms enable a model to perform sequential RE over multiple tasks sustainably and efficiently without forgetting the knowledge learned from old tasks. In this paper, representative deep RE and CRE methods of recent years are surveyed in terms of their deep network models, algorithmic frameworks, and performance characteristics, and the research trends of RE and CRE are pointed out. We hope that this extensive survey will inspire further ideas for research on RE and CRE.

    Algorithm for Byzantine fault-tolerant consensus to support dynamic feedback decision-making
    ZHAI Sheping, CAO Yongqiang, YANG Rui, ZHANG Ruiting
    Journal of Xidian University. 2024, 51(6):  132-148.  doi:10.19665/j.issn1001-2400.20240902

    The blockchain is popular as a distributed ledger that records encrypted transactions, ensuring the consistency of transactions in the ledger through consensus algorithms across a network of untrusted nodes. The broadcasting process of voting-based consensus protocols requires a large number of network forwards, and the resulting communication overhead severely affects on-chain performance. As the number of nodes increases, performance deteriorates dramatically, and scalability to large numbers of nodes is severely constrained. To address these problems, the consensus process is treated as an optimization problem, and a Byzantine fault-tolerant consensus algorithm using learning automata for voting decisions is proposed. Learning automata are embedded into blockchain nodes to carry out consensus voting operations in place of the nodes themselves, reducing the impact of malicious node behavior on the system. The consensus decision is made by the master node and its neighboring learning nodes: the master node gives feedback to the learning nodes based on a criterion function and the overall voting result, the learning nodes adjust their voting strategy based on this feedback, and consensus is reached when the master node's criterion function converges. The proposed strategy accelerates the convergence of consensus, adjusts the rules of the learning automata to reduce the influence of faulty nodes, and uses a reward-and-punishment mechanism to encourage normal nodes to participate in the consensus process and to reduce consensus delay. Experimental results show that the proposed consensus algorithm has a lower complexity than existing algorithms in large-scale node scenarios and shows better fault tolerance in the face of Byzantine nodes, reducing the impact of faulty nodes while maintaining fast transactions and ensuring the scalability and fault tolerance of large-scale node networks.
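The reward-and-punishment feedback loop described above is the classical linear reward-penalty update of a learning automaton: reward pushes probability mass toward the chosen action, penalty redistributes it away. The learning rates `a` and `b` are assumed example values; the paper's exact update rules may differ.

```python
def update(p, chosen, rewarded, a=0.1, b=0.05):
    """Linear reward-penalty (L_RP) update of an automaton's action probabilities.

    p: current probability vector over actions (sums to 1).
    chosen: index of the action just taken; rewarded: environment feedback.
    Both branches preserve sum(p) == 1 by construction.
    """
    q = list(p)
    n = len(p)
    if rewarded:
        for j in range(n):
            # chosen action gains a fraction of the remaining mass
            q[j] = p[j] + a * (1 - p[j]) if j == chosen else p[j] * (1 - a)
    else:
        for j in range(n):
            # chosen action loses mass, spread evenly over the others
            q[j] = p[j] * (1 - b) if j == chosen else b / (n - 1) + p[j] * (1 - b)
    return q
```

Convergence of such updates (the chosen "vote" probability approaching 1 when feedback is consistently positive) plays the role of the criterion-function convergence in the abstract.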

    Research on the packet classification algorithm based on the intelligent rule storage matching model
    LI Zhuo, WANG Tongtong, LIU Kaihua
    Journal of Xidian University. 2024, 51(6):  149-158.  doi:10.19665/j.issn1001-2400.20240913

    In research on packet classification, designing an efficient index structure to achieve fast packet matching is the key to effective classification. Therefore, in order to improve the memory utilization of hash-based packet classification, a rule storage matching model is proposed to achieve more uniform mapping and reduce hash collisions. This model supports more uniform rule storage mapping within a classification subset. Based on this model, a packet classification algorithm is proposed which uses a prefix length reduction algorithm to divide the ruleset into multiple subsets, and then allocates the same storage structure to each subset. This storage structure includes an identification information processing unit, a mapping model unit, and a rule query matching unit. In the packet classification stage, the identification information processing unit converts packet headers into multi-dimensional vectors, which are input to the mapping model unit; based on the model's output, the matching rules are retrieved from the rule query matching unit. Experimental results show that, compared with existing algorithms, the proposed algorithm doubles the classification throughput, reduces storage consumption by about 20% on average, and increases the update speed by 3 times on average, while also improving the uniformity of the rule mapping.

    Retina grading algorithm integrating PVTv2 and dynamic perception
    LIANG Liming, JIN Jiaxin, LI Yulin, DONG Xin
    Journal of Xidian University. 2024, 51(6):  159-170.  doi:10.19665/j.issn1001-2400.20240801

    To address the challenges of image quality variation and the difficulty of lesion area recognition in retinal lesion grading, a novel retinal grading algorithm that integrates PVTv2 and dynamic perception is proposed. The Eye-Quality dataset undergoes quality assessment followed by lesion grading. The algorithm first preprocesses the dataset with grayscale conversion and Gaussian filtering to enhance the contrast between lesion regions and background noise. Subsequently, the PVTv2 backbone network performs multi-scale feature extraction on retinal images, comprehensively capturing multi-scale textural features. Parallel region perception modules and channel reconstruction units are employed to suppress background interference and focus on lesion features, enhancing feature recognition. A dynamic adaptive feature fusion module establishes connections between global semantic information and edge details. Finally, a hybrid loss function alleviates class imbalance, further improving the grading performance. In quality grading experiments on the Eye-Quality dataset, the accuracy and precision are 88.68% and 87.72%, respectively. Using accuracy as the evaluation metric for lesion grading on this basis, the rates for the excellent, usable, and rejectable categories are 80.23%, 74.37%, and 73.73%, respectively. These results highlight the impact of image quality differences on lesion grading effectiveness, providing a new avenue for intelligent auxiliary diagnosis in retinal grading.

    Improved SwinIR for multi-feature fusion image super-resolution reconstruction
    WANG Jinhua, WEI Ting, CAO Jie, CHEN Li
    Journal of Xidian University. 2024, 51(6):  171-181.  doi:10.19665/j.issn1001-2400.20240911

    In image super-resolution reconstruction based on the advanced SwinIR method, the ability to model local information in low-resolution images is insufficient, resulting in inadequate feature extraction and poor quality of reconstructed images. An improved SwinIR multi-feature fusion image super-resolution reconstruction method is therefore proposed. In the deep feature extraction module, the proposed algorithm first designs several residual Swin Transformer blocks (RSTB) in series, uses the Swin Transformer layers (STL) of the RSTBs for long-range dependency modeling to extract high-frequency image information, and uses residual connections to aggregate features at different levels. Second, alternating spatial attention and channel attention modules (SA-CA) are designed in series to compensate for the RSTBs' limited local modeling ability, so that the network can capture the missing contextual information in the spatial and channel dimensions and promote the reconstruction of edge details. Finally, the sum of shallow and deep features is fused and transmitted to the reconstruction module through a long skip connection for high-quality image reconstruction. Experimental results show that on four test sets at magnifications of 2, 3, and 4, the proposed algorithm achieves better peak signal-to-noise ratio and structural similarity than SwinIR, and the edge structure and overall contour of the reconstructed images are visually clearer.

    CNN-GRU speech emotion recognition algorithm for self-supervised contrastive learning
    SUN Zhi, WANG Guan
    Journal of Xidian University. 2024, 51(6):  182-193.  doi:10.19665/j.issn1001-2400.20241109

    In online speech emotion recognition, models are often difficult to train because the input data are vast, making it challenging to capture meaningful emotional cues within the speech or to meet highly concurrent real-time demands with complex network structures. To address this problem, a lightweight online speech emotion recognition method using self-supervised contrastive learning with a CNN-GRU (Convolutional Neural Network - Gated Recurrent Units) architecture is proposed. By pretraining the model with the loss function of contrastive predictive coding (CPC), it learns crucial emotional feature representations from lengthy conversational speech, addressing the problem of speech feature extraction. A lightweight CNN-GRU encoder-decoder network is designed to meet real-time detection requirements: raw audio is fed into a 1D-CNN to extract audio features, which are then passed through GRU units to obtain emotion category labels. Experimental comparisons conducted on thousands of recorded dialogues from a call center platform demonstrate that the proposed method achieves an accuracy of 96.97%. The results indicate the superiority of this approach in scenarios involving natural conversation and strong real-time constraints.
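At the heart of CPC pretraining is an InfoNCE-style contrastive loss: the model is rewarded for scoring the true future representation higher than distractors. The NumPy sketch below shows that loss on plain vectors (cosine similarity, assumed temperature 0.1); the paper's encoder, context network, and negative-sampling scheme are not reproduced.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: -log softmax similarity of the positive among all candidates.

    anchor: (d,) context representation; positive: (d,) true future;
    negatives: (k, d) distractor representations. Lower loss means the
    positive is ranked more confidently above the negatives.
    """
    cands = np.vstack([positive[None, :], negatives])
    norm = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = norm(cands) @ norm(anchor)          # cosine similarities
    logits = sims / temperature
    logits -= logits.max()                     # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```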

    Machine learning-assisted trust evaluation scheme for emergency messages in VANETs
    ZHOU Hao, SHAO Shiyun, MA Yong, LIU Zhiquan, GUAN Quanlong, WANG Xiaoming
    Journal of Xidian University. 2024, 51(6):  194-203.  doi:10.19665/j.issn1001-2400.20230605

    Vehicular ad hoc networks (VANETs) are an important part of intelligent transportation systems and can improve traffic safety and efficiency through the dissemination of vehicle messages. However, the dissemination of false emergency messages by malicious vehicles poses a serious threat to the normal operation of VANETs and to traffic safety. To address the low accuracy of message trust evaluation in existing trust management schemes for VANETs with a high proportion of malicious vehicles, a machine learning-assisted trust evaluation scheme is proposed which optimizes the existing trust evaluation algorithm by introducing a random forest model to assist roadside units in analyzing emergency messages and outputting the predicted probability that a message is true. Based on this prediction probability, a switchable caching mechanism and a trust value query algorithm are designed to balance the conflict between query efficiency and the storage overhead of roadside units in the existing scheme. Meanwhile, the prediction probability is used as a reference factor in the trust evaluation algorithm to obtain higher message evaluation accuracy. Finally, the proposed scheme is compared with the existing scheme, and experimental results show that its message trust evaluation accuracy is improved by approximately 6.2% to 21.9% and that it exhibits good robustness under several proportions of malicious vehicles.
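The idea of using the classifier's probability as a reference factor can be sketched as a simple convex blend of the model output with the sender's historical trust, plus an update rule that moves that history toward the observed outcome. The weight `alpha` and step size are illustrative assumptions, not the paper's formulas.

```python
def message_trust(p_rf, sender_trust, alpha=0.6):
    """Blend the random forest's probability that the message is true
    (p_rf in [0, 1]) with the sender's historical trust (in [0, 1])."""
    return alpha * p_rf + (1 - alpha) * sender_trust

def update_sender_trust(old, message_was_true, step=0.1):
    """Exponential moving average toward 1 (truthful) or 0 (false)."""
    target = 1.0 if message_was_true else 0.0
    return old + step * (target - old)
```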

    Public-key searchable encryption scheme for supporting fast range search
    DING Yong, WENG Nengxiang, WANG Haiyan, LUO Fucai
    Journal of Xidian University. 2024, 51(6):  204-214.  doi:10.19665/j.issn1001-2400.20240908

    In recent years, cloud storage services have gradually become the mainstream method of data storage, but they have also brought challenges to data privacy protection. Public-key searchable encryption allows users to perform keyword searches on encrypted data without decrypting it, providing convenient data retrieval while protecting privacy, and has therefore been widely adopted. However, most current searchable encryption schemes suffer from low efficiency in range searches and vulnerability to keyword guessing attacks. To address these issues, this paper constructs a public-key searchable encryption scheme that supports efficient range searches using 0-1 encoding, and introduces public-key authentication to enable collaborative encryption between sender and receiver, preventing third parties from constructing valid ciphertexts and trapdoors and ensuring the security of the scheme. To improve search efficiency, the scheme builds ciphertext indices from trapdoor search records, comparing the search ranges of new and old trapdoors and combining ciphertext indices to reduce the number of ciphertexts that need to be compared, thereby achieving fast searches. Security analysis shows that the scheme can resist keyword guessing attacks from cloud servers, and experimental results demonstrate that its ciphertext indices effectively improve the efficiency of ciphertext searches.
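The 0-1 encoding underlying such range searches (the Lin-Tzeng comparison technique) reduces an integer comparison to a set-intersection test, which is what makes it compatible with keyword-style encrypted matching. The plaintext sketch below shows the encoding itself; in the actual scheme the prefix sets would of course be encrypted, not compared in the clear.

```python
def one_encoding(x, n):
    """1-encoding of an n-bit x: prefixes of its bit string that end in a 1."""
    bits = format(x, f'0{n}b')
    return {bits[:i + 1] for i in range(n) if bits[i] == '1'}

def zero_encoding(x, n):
    """0-encoding: for each 0-bit, the preceding prefix with that bit flipped to 1."""
    bits = format(x, f'0{n}b')
    return {bits[:i] + '1' for i in range(n) if bits[i] == '0'}

def greater_than(x, y, n=8):
    # Lin-Tzeng: x > y iff the 1-encoding of x and 0-encoding of y intersect.
    return bool(one_encoding(x, n) & zero_encoding(y, n))
```

A range predicate such as lo <= v <= hi then decomposes into two such comparisons, each testable by whether encrypted prefix sets share an element.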