
Table of Contents

    20 December 2021, Volume 48, Issue 6
      
    Special Issue: Key Technology of Architecture and Software for Intelligent Embedded Systems
    Introduction to the special issue on key technology of architecture and software for intelligent embedded systems
    WANG Quan, YANG Tianruo, ZHU Dakai, DENG Qingxu, GUO Bing, CHEN Minsong, DONG Yunwei, YAN Yi, JIANG Jianhui, ZHANG Kailong, XIE Guoqi, ZHOU Junlong
    Journal of Xidian University. 2021, 48(6):  1-7.  doi:10.19665/j.issn1001-2400.2021.06.001
    Dynamic semi-online task scheduling method for the edge computing platform
    ZHAO Hui, FENG Nanzhi, WANG Quan, WAN Bo, WANG Jing
    Journal of Xidian University. 2021, 48(6):  8-15.  doi:10.19665/j.issn1001-2400.2021.06.002

    When an edge computing platform contains both known and unknown computing nodes, task scheduling in this scenario is called semi-online task scheduling. Due to the influence of unknown nodes, a conventional task scheduling method may lead to a long makespan or transmission time, which aggravates the problem of high energy consumption on the edge computing platform. To solve this problem, this paper proposes a Dynamic semi-online task Scheduling Strategy (DSS) for the edge computing platform, aiming at energy consumption optimization. First, by considering the main factors affecting energy consumption on the edge computing platform, the energy consumption of task execution, task transmission and idle periods is modeled from the perspectives of the processing speed, routing delay and queueing delay of the edge nodes, and an energy-consumption-oriented task scheduling model is established. Second, for unknown nodes, this paper proposes a dynamic-mapping-based semi-online task scheduling algorithm, which assumes that the performance of an unknown node equals that of a given known node, thereby forming a mapping between unknown and known nodes. The algorithm then dynamically adjusts this mapping by continuously sensing the task queue lengths of both sides, thus making full use of prior knowledge and reducing energy consumption. Finally, a comparative evaluation on the CloudSim platform shows that the proposed method can effectively reduce the energy consumption of the edge computing platform.
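
    To make the mapping idea concrete, the following minimal Python sketch schedules tasks while re-mapping each unknown node to the known node whose queue length is closest; the node structure, the re-mapping rule and the finish-time estimate are illustrative assumptions, not the paper's exact formulation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    speed: float                      # tasks/s if known, None if unknown
    queue: list = field(default_factory=list)

def remap_unknown(unknown, known_nodes):
    """Map an unknown node to the known node whose queue length is closest,
    using queue length as a proxy for effective processing capacity."""
    return min(known_nodes, key=lambda k: abs(len(k.queue) - len(unknown.queue)))

def schedule(task, known_nodes, unknown_nodes, mapping):
    # Re-estimate each unknown node's speed from its current mapped twin.
    for u in unknown_nodes:
        mapping[u.name] = remap_unknown(u, known_nodes)
    # Greedy choice: smallest estimated finish time = (queue + 1) / speed.
    def est_finish(n):
        speed = n.speed if n.speed else mapping[n.name].speed
        return (len(n.queue) + 1) / speed
    target = min(known_nodes + unknown_nodes, key=est_finish)
    target.queue.append(task)
    return target
```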

    Multi-node cooperative game load balancing strategy in the Kubernetes cluster
    LI Huadong, ZHANG Xueliang, WANG Xiaolei, LIU Hui, WANG Pengcheng, DU Junzhao
    Journal of Xidian University. 2021, 48(6):  16-22.  doi:10.19665/j.issn1001-2400.2021.06.003

    Kubernetes has the potential to become a new generation of hyper-converged architecture, but research on cluster resource load balancing still faces two problems. On the one hand, most existing scheduling algorithms are static and do not consider the dynamics of cluster resources in actual use; on the other hand, existing work on cluster resource load balancing only optimizes the CPU and memory resources in the cluster and cannot give a complete resource profile, so the algorithms lack comprehensiveness. In this paper, we introduce MBCGT, a multi-resource load balancing algorithm based on cooperative game theory for Kubernetes cluster scheduling, and propose an index of cluster resource load balance to optimize scheduling and reduce the fragmentation of cluster resources. First, we use real-time monitoring to obtain the actual resource usage of service requests and thus achieve dynamic service scheduling. Second, we consider the load balance among four resources, namely CPU, memory, network bandwidth and disk IO, and establish a cooperative game model between physical nodes to guarantee a lower bound for the MBCGT algorithm under the differing resource requests of various applications and to alleviate resource fragmentation in the cluster. Finally, the algorithm is tested in a real Kubernetes cluster. The results show that MBCGT reduces the fragmentation of cluster resources and that the average load balance of each node in the cluster can be increased by 8.40%.
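
    A hedged sketch of one plausible load-balance index over the four resources considered here (CPU, memory, network bandwidth, disk IO): the dispersion-based definition below is an illustrative assumption, not necessarily the exact MBCGT index.

```python
import numpy as np

def load_balance_index(util):
    """util: array of shape (n_nodes, 4), per-resource utilization in [0, 1]."""
    per_node = 1.0 - util.std(axis=1)   # balance across resources per node
    return per_node.mean()              # cluster-wide average balance

cluster = np.array([[0.60, 0.55, 0.58, 0.62],   # a well-balanced node
                    [0.90, 0.20, 0.10, 0.95]])  # a fragmented node
print(load_balance_index(cluster))
```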

    Harnessing adversarial examples via input denoising and hidden information restoring
    LIU Jiawei, ZHANG Wenhui, KOU Xiaoli, LI Yanni
    Journal of Xidian University. 2021, 48(6):  23-31.  doi:10.19665/j.issn1001-2400.2021.06.004

    Although deep learning has achieved great success in various applications, deep neural networks (DNNs) are vulnerable to adversarial samples with imperceptible perturbations, which greatly degrades the robustness and performance of DNNs. Existing denoising algorithms against adversarial samples tend to destroy information in the clean samples, which reduces the classification accuracy of CNNs. To overcome this weakness, this paper presents a novel enhanced denoising algorithm, ID+HIR (Input Denoising and Hidden Information Restoring), for adversarial samples. ID+HIR consists of enhanced input denoising and hidden lossy information restoring based on the theory of the convex hull. The algorithm first trains a denoiser at the input layer of the model, whose input is the concatenation of clean and adversarial samples; the denoiser is expected to remove adversarial perturbations while avoiding forgetting the clean samples. Since the denoiser inevitably destroys some of the information contained in the clean samples, a restorer is trained in the hidden layers of the model, whose input is a convex combination of the hidden vectors of the clean and adversarial samples; the restorer is expected to remap samples located in the incorrect classification space back to the correct classification space, thus yielding a more robust model. Extensive comparative experiments on several standard datasets show that the proposed denoiser and restorer can effectively improve the robustness of the model, and that the proposed algorithm ID+HIR is superior to competitive baselines on benchmark datasets.
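
    The convex-combination input to the restorer can be sketched in a few lines; the mixing-weight distribution below is an illustrative assumption.

```python
import numpy as np

def convex_mix(h_clean, h_adv, rng):
    """Convex combination of clean and adversarial hidden vectors."""
    lam = rng.uniform(0.0, 1.0)                 # lam in [0,1] keeps the point
    return lam * h_clean + (1.0 - lam) * h_adv  # inside the convex hull

rng = np.random.default_rng(0)
h_c, h_a = rng.normal(size=128), rng.normal(size=128)
h_mix = convex_mix(h_c, h_a, rng)               # fed to the restorer network
```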

    Optimization of task scheduling oriented to cross microservice chains
    ZHANG Yupeng, WU Zili, CHEN Ming, ZHANG Lulu
    Journal of Xidian University. 2021, 48(6):  32-39.  doi:10.19665/j.issn1001-2400.2021.06.005

    The microservice architecture organizes an application as a set of loosely coupled, fine-grained services, with each microservice independently deployed and updated. The cooperation of services leads to multiple intersecting microservice chains, and the intersections of microservices become key positions for resource competition. Therefore, the rational allocation of microservices can improve resource utilization, reduce the task response time and mitigate the resource competition caused by the intersection of microservice chains. However, existing research often ignores or simplifies the conflict problem caused by intersecting microservice chains, resulting in poor system scheduling. Aiming at the above problem, this paper takes resource utilization and the global response time as measurement indicators to formally characterize the resource consumption of services and the task execution time in the microservice architecture. Combining the parallel-computing advantage of the ant colony algorithm with the local perturbations of the simulated annealing algorithm, this paper proposes a chain-oriented task scheduling algorithm (COTSA). Experimental results show that, compared with first come first served (FCFS) and ant colony optimization (ACO), COTSA can effectively improve resource utilization and reduce the overall response time in complex microservice environments.
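
    A minimal sketch of the simulated-annealing component that such a hybrid combines with ant-colony construction: a candidate assignment may be accepted even when worse, with probability exp(-delta/T). The cost and perturbation functions are placeholders to be supplied by the scheduler.

```python
import math, random

def anneal(assignment, cost, perturb, t0=1.0, cooling=0.95, steps=100):
    """Locally refine an ACO-produced task-to-node assignment."""
    best, best_c = assignment, cost(assignment)
    cur, cur_c, t = best, best_c, t0
    for _ in range(steps):
        cand = perturb(cur)
        delta = cost(cand) - cur_c
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur, cur_c = cand, cur_c + delta    # accept (possibly worse) move
        if cur_c < best_c:
            best, best_c = cur, cur_c
        t *= cooling                            # cool the temperature
    return best
```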

    Embedded heterogeneous computing service placement strategy for fog computing
    LIU Jinhui, YI Bijie, ZHANG Hao
    Journal of Xidian University. 2021, 48(6):  40-47.  doi:10.19665/j.issn1001-2400.2021.06.006

    Limited by the long-distance communication between the cloud and end devices, processing data only in the cloud can no longer meet the needs of time-sensitive applications, which prompts some applications to expand to lower edge devices. With the rapid development of embedded systems, fog computing has become a new computing paradigm that connects cloud and end devices to execute applications closer to data sources. The computing capability of the fog layer is usually derived from high-performance heterogeneous embedded boards. Different mapping and placement strategies for services in the fog layer have a great impact on the resource utilization of fog-layer devices. Most existing service placement strategies aim at improving the system Quality of Service (QoS) but ignore the heterogeneity of embedded devices and the limitation of computing resources, which leads to a decrease in resource utilization. To solve the above problems, this paper proposes a service placement strategy for fog computing applications. Based on the microservice architecture, the heterogeneous resources in the fog computing layer are modeled and optimized, and the characterization of heterogeneous resource attributes is refined. While guaranteeing system QoS, the strategy improves system resource utilization through dynamic comparison of service placement consumption. Compared with a request-rate-based placement strategy and the iFogSim default placement strategy, the system resource utilization of the proposed strategy increases by 10.7% and 28.7%, respectively.
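
    As a rough sketch of consumption-aware placement, each service below goes to the feasible fog device whose estimated placement cost is lowest; the device capacities, demands and the cost function are illustrative assumptions, not the paper's model.

```python
def place(services, devices):
    """services: {name: {resource: demand}}; devices: {name: {resource: free}}."""
    plan = {}
    for svc, demand in services.items():
        feasible = [d for d in devices
                    if all(devices[d][r] >= demand[r] for r in demand)]
        # Cost: total fraction of each device's remaining capacity consumed.
        best = min(feasible, key=lambda d: sum(demand[r] / devices[d][r]
                                               for r in demand))
        for r in demand:                        # commit the resources
            devices[best][r] -= demand[r]
        plan[svc] = best
    return plan
```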

    Efficient self-supervised meta-transfer algorithm for few-shot learning
    SHI Jiahui, HAO Xiaohui, LI Yanni
    Journal of Xidian University. 2021, 48(6):  48-56.  doi:10.19665/j.issn1001-2400.2021.06.007

    A key difficulty of current deep learning is the few-shot problem. Although some effective few-shot algorithms/models have appeared, the features of existing deep models are limited and their generalization ability is low: if the distribution of the data in a new class differs greatly from that of the training dataset, the classification result will be poor. In view of these shortcomings, this paper proposes a residual attention dilated convolutional network as the feature extractor of the network model. The dilated branch enlarges the model's receptive field and can extract features of different sizes, while image-based residual attention enhances the model's attention to important features. A self-supervised pre-training algorithm for the network model is also proposed: in the pre-training stage, the image data are rotated at different angles and corresponding labels are established, and a rotation classifier based on image structure information is designed to increase the supervision information in the training task, so as to mine the data more thoroughly and enhance the generalization ability of the algorithm. On the benchmark few-shot datasets miniImageNet and Fewshot-CIFAR100, the proposed algorithm is compared with the latest and best few-shot algorithms, with experimental results showing that it achieves state-of-the-art performance.
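
    The rotation-based self-supervision step is easy to sketch: each image yields four rotated copies whose rotation index serves as a free label for an auxiliary 4-way classifier. This is a generic construction consistent with the abstract, not code from the paper.

```python
import numpy as np

def make_rotation_task(images):
    """images: array (N, H, W, C) -> rotated images and rotation labels."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):                       # k quarter-turns: 0/90/180/270
            rotated.append(np.rot90(img, k=k, axes=(0, 1)))
            labels.append(k)
    return np.stack(rotated), np.array(labels)
```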

    Analysis of the relationship between the performance of the BC and the format of the graph data
    JIANG Lin, FENG Ru, DENG Junyong, LI Yuancheng
    Journal of Xidian University. 2021, 48(6):  57-66.  doi:10.19665/j.issn1001-2400.2021.06.008

    The data compression format used in graph computing is one of the key factors influencing the memory access efficiency and performance of graph algorithms. Based on this, this paper focuses on how the BC (Betweenness Centrality) algorithm should select an appropriate compression format according to performance requirements so as to improve the performance of the graph computing system. Hardware performance counters on a Skylake Xeon(R) Platinum 8164 processor are used to analyze different data, with five compression formats, namely COO, CSC, CSR, DCSC and CSCI, adopted for performance evaluation and analysis. The evaluation indicators include execution time, computation volume, data movement volume and power consumption. The evaluation results show that, when hardware resources are limited, the CSR compression format performs best for the traversal-centric BC algorithm and can effectively reduce program execution time, data movement and power consumption. The CSC compression format can effectively reduce the cache miss rate and make better use of data locality. The DCSC compression format can improve graph data storage efficiency when memory usage is a concern. The CSCI compression format has certain advantages for the data parallelism of hardware accelerators, but it is not ideal for graph applications on general-purpose processors. The COO compression format is relatively poor at improving the performance of graph computing applications. These analytical results provide a basis for the BC algorithm to select preprocessing methods according to different performance requirements.
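
    For reference, a minimal CSR construction, the format found best here for the traversal-centric BC algorithm: row offsets plus column indices make a vertex's neighbor list a contiguous slice (good locality, no per-edge row ids as in COO).

```python
import numpy as np

def to_csr(num_vertices, edges):
    """edges: iterable of (src, dst) pairs for a directed graph."""
    deg = np.zeros(num_vertices, dtype=np.int64)
    for s, _ in edges:
        deg[s] += 1
    offsets = np.concatenate(([0], np.cumsum(deg)))   # row pointers
    cols = np.empty(offsets[-1], dtype=np.int64)
    fill = offsets[:-1].copy()
    for s, d in edges:
        cols[fill[s]] = d
        fill[s] += 1
    return offsets, cols

offsets, cols = to_csr(4, [(0, 1), (0, 2), (2, 3)])
print(cols[offsets[0]:offsets[1]])   # neighbors of vertex 0 -> [1 2]
```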

    JEDERL:A task scheduling optimization algorithm for heterogeneous computing platforms
    LV Wenkai, YANG Pengfei, DING Yunqing, ZHANG Heyu, ZHENG Tianyang
    Journal of Xidian University. 2021, 48(6):  67-74.  doi:10.19665/j.issn1001-2400.2021.06.009

    With the rapid development of GPUs, FPGAs and other computing units, heterogeneous computing platforms are widely used in cloud computing, data centers, the Internet of Things and other fields because of their rich computing resources, flexible architecture and strong parallel processing capability. Aiming at the task scheduling problems of heterogeneous computing resources and the lack of global task information on heterogeneous computing platforms, a task execution model is built according to the attributes of tasks and computing resources. Then, graph neural networks are used to encode the scalable state information on tasks and computing resources, and the characteristics of tasks and computing resources are aggregated at three levels, which solves the problems of an uncertain number of tasks and the lack of global information. To minimize the average task completion time, we design a task scheduling algorithm based on the Deep Deterministic Policy Gradient (DDPG). Experimental results show that, compared with Random scheduling, First In First Out scheduling, Shortest Job First scheduling, Roulette scheduling and an existing reinforcement learning scheduling algorithm, the average task completion time of our algorithm (JEDERL, Job Embedding Device Embedding Reinforcement Learning) is reduced by 27.8%, 12.6%, 28.6%, 21.9% and 5.3%, respectively, and that the proposed algorithm stays stable when the number of cluster servers and tasks changes.
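
    A rough sketch of the three-level aggregation idea, with mean pooling standing in for the paper's graph-neural-network aggregation so the state encoding stays fixed-size however many tasks arrive; all names are illustrative.

```python
import numpy as np

def encode_state(task_feats_per_device):
    """task_feats_per_device: list of arrays, one (n_tasks_i, d) per device."""
    device_emb = [f.mean(axis=0) for f in task_feats_per_device]  # level 1: tasks -> device
    device_emb = np.stack(device_emb)                             # level 2: per-device summaries
    global_emb = device_emb.mean(axis=0)                          # level 3: global state
    return device_emb, global_emb
```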

    Novel and efficient algorithm for entity relation extraction with the corpus knowledge graph
    HU Daiwang, JIAO Yiyuan, LI Yanni
    Journal of Xidian University. 2021, 48(6):  75-83.  doi:10.19665/j.issn1001-2400.2021.06.010

    Entity relation extraction aims to extract the semantic relation between two entities in a given sentence and is a basic and important task in information extraction and natural language processing. Although some good deep learning algorithms for entity relation extraction have been presented, making full use of corpus information and effectively extracting the relationship between entities in a sentence to further improve the accuracy of the model still faces challenges. In this paper, a new entity semantic relation graph is constructed from the training corpus and can be extended as testing goes on. The entity semantic relation graph is used to globally capture the semantic relation correlations between entities across all the sentences in the corpus. Then, a large number of "other" relations existing in the corpus are selected as negative samples for training to improve the classification performance. Finally, equipped with the light pre-trained ALBERT, a graph convolutional network and a negative-sample triplet loss, we present a new relation extraction method, which can continuously summarize and perfect the knowledge related to the entity pairs to be extracted and effectively improve the accuracy of entity relation extraction. Extensive experiments on the SemEval-2010 Task 8 and TACRED benchmarks show that the proposed algorithm achieves a better performance than the competitive baselines.

    Optimization of large-scale graph traversal for supercomputers
    TAN Wen, GAN Xinbiao, BAI Hao, XIAO Tiaojie, CHEN Xuguang, LEI Shumeng, LIU Jie
    Journal of Xidian University. 2021, 48(6):  84-95.  doi:10.19665/j.issn1001-2400.2021.06.011

    In the big data era, with the significant growth of graph data, the demand for computing resources is increasing rapidly. Supercomputers are applied to process large-scale graph data, which puts forward higher requirements for their storage and computing capabilities. In order to efficiently process large-scale graph data and evaluate the graph processing capabilities of the Tianhe supercomputer, this paper proposes a graph traversal optimization technique for improving the efficiency of the Graph500 benchmark program, an important benchmark for evaluating the graph processing capabilities of supercomputers. The technique mainly adopts a vertex sorting and priority caching strategy: the vertices in the graph are sorted by degree in descending order, and some key vertices are stored in the cache of the core group of the Tianhe system. The technique thus cuts down on invalid memory accesses and reduces the communication overhead between processes, maximizing the usage of the system bandwidth of the supercomputer. To validate graph traversal based on vertex sorting and buffering, an optimized Graph500 version named VS-graph500 is customized for the Tianhe supercomputer. Experimental results demonstrate that VS-graph500 achieves significant acceleration and good scalability on the supercomputer testing system, attaining a stable testing performance of 2547.13 GTEPS at a graph scale of 37, which is superior to the 7th entry on the Graph500 list of June 2020.
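
    The vertex-sorting step can be sketched as a degree-descending relabeling so that high-degree vertices occupy a small contiguous id range that can be pinned in the core group's cache; this is a generic rendering of the strategy, not the Tianhe implementation.

```python
import numpy as np

def sort_by_degree(offsets, cols):
    """offsets/cols: CSR arrays. Returns an old-id -> new-id relabeling."""
    deg = np.diff(offsets)
    order = np.argsort(-deg)                 # old ids, highest degree first
    new_id = np.empty_like(order)
    new_id[order] = np.arange(len(order))    # hottest vertices get ids 0, 1, ...
    return new_id                            # apply to both endpoints of edges
```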

    Information and Communications Engineering
    Recognition algorithm for the little sample radar modulation signal based on the generative adversarial network
    YU Haoyang, YIN Liang, LI Shufang, LV Shun
    Journal of Xidian University. 2021, 48(6):  96-104.  doi:10.19665/j.issn1001-2400.2021.06.012

    Radar modulation recognition technology plays a large part in electronic reconnaissance, electronic support and other traditional areas. Existing radar modulation signal recognition algorithms are usually implemented using intra-pulse feature extraction or deep learning techniques, and both methods have disadvantages. Extracting intra-pulse features requires complex prior knowledge. Deep learning does not require such prior knowledge, but it is data-driven and needs massive data to support its training, while radar signal data are very difficult to obtain and building a large, representative dataset is also difficult. Therefore, the need for a small-sample recognition method for deep learning becomes urgent. In this paper, we propose a radar modulation signal recognition algorithm based on an enhanced Deep Convolutional Generative Adversarial Network (SDCGAN) and a Convolutional Neural Network (CNN) to achieve data augmentation. Under small-sample conditions, it can still achieve high-precision identification of a variety of radar modulated signals. Comparative experiments verify the superiority of the SDCGAN-CNN algorithm over other algorithms and the effectiveness of signal recognition under small-sample conditions. Under a relatively high signal-to-noise ratio, its recognition accuracy is 4% higher than that of other generative adversarial network plus convolutional neural network methods, and 10% higher than that of convolutional neural network methods alone.

    Passive localization based on energy-time-frequency information fusion
    WAN Pengwu, YAO Yuanyuan, YAN Qianli, CHEN Yufei
    Journal of Xidian University. 2021, 48(6):  105-114.  doi:10.19665/j.issn1001-2400.2021.06.013

    With the increasing complexity of the wireless communication environment, it is difficult to meet the requirements of high-precision source positioning based on single-domain information (such as the energy domain, time domain, frequency domain or spatial domain). In order to solve the problem of deteriorating source localization performance in non-line-of-sight environments, this paper proposes a passive source location technology with energy-time-frequency domain information fusion, and utilizes an iterative method with non-line-of-sight error feedback to obtain a high-precision location of the source target. Source localization in a reasonably complex environmental scenario is considered, and a thorough analysis of the three-domain localization parameter model is made. The non-line-of-sight error in each domain is taken as an unknown to construct the multi-domain joint localization equation. Based on an analysis of the three-domain joint maximum likelihood equation, the non-line-of-sight error statistic is replaced and the equation is reasonably simplified. The squared distance and weighted least squares are introduced to transform the non-convex problem into a generalized trust region subproblem. Without prior information on the non-line-of-sight parameters, initial estimates of the source position and velocity parameters as well as the non-line-of-sight error can be obtained simultaneously by the bisection method. To ensure the location accuracy in a complex environment, the iterative method is used to improve the estimation accuracy of the source location parameters by taking the non-line-of-sight deviation as feedback. The effectiveness and reliability of the proposed algorithm are verified by computer simulation.

    Trajectory and resource allocation for multi-UAV enabled SWIPT systems
    TIAN Lin, SU Zhijie, FENG Wanmei, CHEN Zhen, TANG Jie, ZHOU Encheng
    Journal of Xidian University. 2021, 48(6):  115-122.  doi:10.19665/j.issn1001-2400.2021.06.014

    This paper considers a multi-unmanned-aerial-vehicle (UAV) enabled simultaneous wireless information and power transfer (SWIPT) system, where a UAV acts as a flying base station (BS) and delivers information and energy to users located in rural and geographically constrained areas. We aim to maximize the minimum achievable rate of the users by jointly optimizing the user association, UAV trajectories and power allocation subject to a minimum energy storage capacity requirement. This is a mixed-integer non-convex optimization problem and cannot be solved directly. To tackle it, we propose an iterative algorithm based on block coordinate descent, in which the user association, UAV trajectory and transmit power are alternately optimized. In particular, the successive convex approximation (SCA) technique is applied to transform the non-convex constraints into convex ones. Simulation results demonstrate the convergence behavior and the strong performance of the proposed algorithm in terms of the minimum achievable rate of all users in multi-UAV enabled SWIPT networks.
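
    The alternating structure can be illustrated with a toy block-coordinate-descent loop on a smooth convex function, each block minimized in closed form while the other is held fixed; in the paper the per-block subproblems are first convexified via SCA. The function below is purely illustrative.

```python
def bcd(steps=50):
    x, y = 0.0, 0.0                 # f(x, y) = (x-1)^2 + (y+2)^2 + x*y
    for _ in range(steps):
        x = 1.0 - y / 2.0           # argmin over x with y fixed
        y = -2.0 - x / 2.0          # argmin over y with x fixed
    return x, y

print(bcd())   # converges to the joint minimizer (8/3, -10/3)
```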

    Dense three-dimensional reconstruction algorithm based on spatially encoded structured light
    CHEN Rong, XU Hongli, YANG Dongxue, HUANG Hua
    Journal of Xidian University. 2021, 48(6):  123-130.  doi:10.19665/j.issn1001-2400.2021.06.015

    3D reconstruction technology based on structured light has the advantages of high accuracy, fast speed and good robustness, and is widely used in fields such as biomedicine and cultural relics restoration, so it is of important theoretical significance and research value. With the rapid development of science and technology, the demand for non-contact detection and measurement of high-speed moving targets has gradually increased, and the 3D reconstruction of fast-moving targets with few features has become the key to solving this problem. Compared with temporal coding, spatial coding only needs to capture a single pattern, which makes it more suitable for the 3D reconstruction of moving targets. This paper proposes a novel dense 3D reconstruction method based on spatially encoded structured light. First, a structured light encoding pattern composed of sinusoidal monochrome stripes and pseudo-random points is constructed, where the stripes are identified by the pseudo-random points to reduce color interference. Second, a dense 3D reconstruction method based on local phase matching is proposed: the spatial resolution of the point cloud is improved by obtaining the phase information of pixels between adjacent stripes for pixel matching. Finally, the method increases the speed of 3D reconstruction, since it uses only one image pair for decoding and needs neither projector calibration nor color correction. Experimental results show that the algorithm is feasible and effective.

    Study on the vibration response mechanism of gear root crack and spalling
    WAN Zhiguo, HE Wangpeng, LIAO Nannan, DOU Yihua, GUO Baolong
    Journal of Xidian University. 2021, 48(6):  131-137.  doi:10.19665/j.issn1001-2400.2021.06.016

    Root crack and spalling are common gear faults in gear transmission systems. Due to the lack of research on the fault mechanisms, these two fault types cannot be accurately distinguished by current gear fault diagnosis methods. Based on the energy method, an analytical model is established to analyze and compare the different mechanisms by which the two faults influence the time-varying meshing stiffness. A dynamic model of the gear system is then established to analyze the vibration response mechanisms of the root crack and spalling faults. The comparison shows that, although both root crack and spalling faults make the system produce periodic vibration and impact responses, the number and pattern of impact responses produced by the two faults within one meshing cycle are quite different. The results reveal the differences in the vibration response mechanisms of root crack and tooth spalling faults, which provides a theoretical basis for the accurate diagnosis of these two kinds of faults.

    Fast calculation of the reflection coefficient of the lossy symmetrical dielectric layer
    SUN Xiangang, WEI Bing, SHI Lei
    Journal of Xidian University. 2021, 48(6):  138-143.  doi:10.19665/j.issn1001-2400.2021.06.017

    Symmetrical dielectric layer structures are quite common in real life, and lossy cases are in the majority, as with many artificial and absorbing materials. In order to grasp the electromagnetic transmission effects of these structures, it is necessary to study their electromagnetic reflection and transmission characteristics. Aiming at the properties of this structure, the equivalent four-terminal network method is used to study the reflection characteristics of the lossy symmetrical dielectric layer structure. Drawing on transmission line theory and the idea of the propagation matrix, this method treats the transmission of plane electromagnetic waves in a lossy symmetrical medium layer as an equivalent four-terminal network, which simplifies the processing of the layered interface matrices and leaves only the spatial phase matrix to be considered. Compared with the propagation matrix method, the number of matrix multiplications is reduced, thereby reducing the amount of calculation and enabling rapid computation of the reflection coefficient. In the photonic crystal example of this article, the calculation efficiency of the equivalent four-terminal network method is more than 10 times that of the propagation matrix method, whose running time exceeded 1 hour. Simulation examples show that the results obtained by the four-terminal network method are consistent with those of the propagation matrix method and the FDTD method, and that its calculation efficiency is much higher than that of the propagation matrix method when calculating multiple frequency points or a broad band.
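
    As a hedged illustration of the four-terminal view, the sketch below computes the normal-incidence reflection coefficient of a single lossy slab from its ABCD chain matrix; the material values are made up, and this textbook formulation omits the paper's full layered symmetric treatment.

```python
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7 * np.pi
eta0 = np.sqrt(mu0 / eps0)                      # free-space wave impedance

def slab_reflection(freq, eps_r, sigma, d):
    w = 2 * np.pi * freq
    eps_c = eps0 * eps_r - 1j * sigma / w       # complex permittivity (lossy)
    gamma = 1j * w * np.sqrt(mu0 * eps_c)       # propagation constant
    eta = np.sqrt(mu0 / eps_c)                  # slab wave impedance
    A = D = np.cosh(gamma * d)                  # ABCD chain matrix of the slab
    B, C = eta * np.sinh(gamma * d), np.sinh(gamma * d) / eta
    # Standard S11 of a two-port terminated by free space on both sides.
    return (A + B / eta0 - C * eta0 - D) / (A + B / eta0 + C * eta0 + D)

print(abs(slab_reflection(1e9, eps_r=4.0, sigma=0.01, d=0.01)))
```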

    Computer Science and Technology
    Algorithm for half-space MLFMA domain decomposition utilizing an octree
    ZHAI Chang, LIN Zhongchao, ZHAO Xunwang, ZHANG Yu
    Journal of Xidian University. 2021, 48(6):  144-150.  doi:10.19665/j.issn1001-2400.2021.06.018

    In order to quickly and accurately analyze the electromagnetic scattering of electrically large objects in half space under limited resources, a parallel half-space multilevel fast multipole algorithm (MLFMA) with a domain decomposition method utilizing an octree is proposed. By using the octree structure formed by the MLFMA, the unknowns are grouped adaptively to realize the domain decomposition, thereby avoiding the creation of artificial interfaces between domains and reducing the workload of model processing. To ensure current continuity between domains, the 1/4 impedance on the boundary of each domain is rigorously calculated, making the results more accurate. To handle the half-space environment, complex image sources are introduced to calculate the near interactions and real image sources to calculate the far interactions. A comparison of numerical results from the proposed algorithm and the commercial software FEKO proves the reliability and accuracy of the algorithm. An out-of-core algorithm is used to store data such as translators on the hard disk, which significantly reduces memory consumption. Finally, a numerical example of a 1000-wavelength ship model in half space demonstrates that the proposed algorithm can simulate electrically large objects under limited resources.

    Application of least squares loss in the multi-view learning algorithm
    LIU Yunrui, ZHOU Shuisheng
    Journal of Xidian University. 2021, 48(6):  151-160.  doi:10.19665/j.issn1001-2400.2021.06.019

    SVM-2K is a multi-view learning model using the nonsmooth hinge loss, but the solution process of a nonsmooth model is comparatively complex. The LSSVM, a classical support vector machine algorithm with the smooth least squares loss, is widely used in scientific research because of its simple calculation, fast operation speed and high precision. In order to improve the training speed of the model, the least squares idea is introduced into SVM-2K. First, the LSSVM-2K model, which fully applies the least squares loss, is proposed: the least squares loss replaces the hinge loss in the SVM-2K model, so that the quadratic programming of the classical multi-view learning model can be replaced by solving linear equations. Second, in order to explore the influence of the least squares loss on the SVM-2K model, two other models using the least squares loss, LSSVM-2KI and LSSVM-2KII, are proposed. In this paper, the new models and other multi-view learning models, namely SVM+ (divided into SVM+A and SVM+B), MVMED, RMvLSTSVM and SVM-2K, are applied to three datasets, the animal feature dataset (AWA), UCI handwritten digits (Digits) and a forest coverage dataset, to test the effectiveness of the new models. Experimental results show that the three new models have a good classification performance: LSSVM-2KI has an advantage in classification accuracy, LSSVM-2K achieves both a good classification accuracy and a great advantage in computation speed, and LSSVM-2KII lies between the two in classification effect and training time.
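
    The computational point of the least squares loss shows up already in a minimal single-view LSSVM: training reduces to one linear solve instead of a quadratic program. The sketch below uses the standard LSSVM dual system; the two-view coupling constraints of LSSVM-2K are omitted.

```python
import numpy as np

def lssvm_fit(X, y, gamma=1.0, sigma=1.0):
    """X: (n, d) floats; y: (n,) labels in {-1, +1}. RBF kernel."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma**2))
    Omega = (y[:, None] * y[None, :]) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)               # one solve, no QP iterations
    return sol[0], sol[1:], X, y, sigma         # b, alpha, training data

def lssvm_predict(model, Xt):
    b, alpha, X, y, sigma = model
    sq = ((Xt[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma**2))
    return np.sign(K @ (alpha * y) + b)
```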

    Semi-supervised word sense disambiguation by combining k-means clustering and the LSTM network
    ZHANG Chunxiang, ZHOU Xuesong, GAO Xueyao, LIU Huan
    Journal of Xidian University. 2021, 48(6):  161-171.  doi:10.19665/j.issn1001-2400.2021.06.020

    Polysemy is an inherent characteristic of natural language. Word sense disambiguation (WSD) determines the meaning of an ambiguous word according to its context and is a key technology in the natural language processing field, widely applied in machine translation, information retrieval and text classification. In order to improve the accuracy of WSD, a semi-supervised WSD method is proposed based on k-means clustering and the Long Short-Term Memory (LSTM) network. With the ambiguous word as the center, the two adjacent lexical units on each side are selected to construct a word window of size 4. Morphology and semantic classes are extracted from the word window as clustering features, and the k-means method is used to cluster the unlabeled corpus. The clustered corpus is added to the SemEval-2007 Task #5 training corpus to expand the training data. Morphology, part of speech, semantic category, English translation and disambiguation distance are extracted from the word window as disambiguation features, and the LSTM network is used to determine the semantic categories of ambiguous words, with the expanded corpus applied to optimize the LSTM parameters. The SemEval-2007 Task #5 test corpus is used to test the WSD classifier. Experiments analyzing the influence of the number of hidden layers and the training corpus scale on the WSD show that the proposed method can improve the WSD accuracy compared with Bayesian classifiers and deep belief networks.
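
    A hedged sketch of the corpus-expansion step: cluster unlabeled context features with k-means, name each cluster by a majority vote over nearby labeled data, and append the pseudo-labeled examples to the training set. The feature extraction and the cluster-to-sense rule here are placeholders, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def expand_corpus(train_X, train_y, unlabeled_X, n_senses):
    km = KMeans(n_clusters=n_senses, n_init=10, random_state=0)
    cluster_of = km.fit_predict(unlabeled_X)
    # Name each cluster by the majority sense among labeled points assigned
    # to the same centroid (simplest plausible rule, for brevity).
    labeled_cluster = km.predict(train_X)
    sense_of_cluster = {}
    for c in range(n_senses):
        mask = labeled_cluster == c
        if mask.any():
            vals, counts = np.unique(train_y[mask], return_counts=True)
            sense_of_cluster[c] = vals[counts.argmax()]
    keep = [i for i, c in enumerate(cluster_of) if c in sense_of_cluster]
    pseudo_y = np.array([sense_of_cluster[cluster_of[i]] for i in keep])
    return (np.vstack([train_X, unlabeled_X[keep]]),
            np.concatenate([train_y, pseudo_y]))
```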

    Synthesis of linear arrays with sidelobe suppression and null steering using the improved coyote optimization algorithm
    GUO Qiang, LI Jiaying, WANG Yani
    Journal of Xidian University. 2021, 48(6):  172-178.  doi:10.19665/j.issn1001-2400.2021.06.021
    Aiming at the problems in the synthesis of uniformly excited linear arrays with a minimum sidelobe level (SLL) and null steering, a linear array synthesis strategy based on an improved coyote optimization algorithm (ICOA) is proposed. Building on the coyote optimization algorithm (COA), a suboptimal-individual variation strategy and a global-optimal intra-group guidance strategy are proposed. A hybrid mapping disturbance is introduced into suboptimal individuals, causing variation within a small range, which improves the population diversity and expands the search scope. Besides, a new growth mode is constructed and the global-optimal intra-group guidance strategy is used to make the algorithm approach the global optimal solution faster, enhancing the local search ability and accelerating convergence. Simulation results show that the convergence speed of the ICOA is significantly faster than that of the COA. Compared with the genetic algorithm, the cuckoo algorithm, the biogeography-based optimization algorithm and the Taguchi algorithm, the ICOA obtains a better SLL and null steering, which proves the effectiveness and superiority of the proposed algorithm.
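
    One way to realize a chaotic disturbance of suboptimal individuals is a logistic-map perturbation, as sketched below; the paper's hybrid mapping combines more than one map, so this single-map version is an illustrative assumption.

```python
import numpy as np

def chaotic_perturb(coyote, scale=0.05, x=0.7, r=4.0):
    """Perturb a solution vector with a logistic-map chaotic sequence."""
    out = coyote.copy()
    for i in range(out.size):
        x = r * x * (1.0 - x)                   # logistic map in (0, 1)
        out[i] += scale * (2.0 * x - 1.0)       # zero-centred disturbance
    return out

suboptimal = np.array([0.5, -0.2, 0.8])         # e.g. element excitations
print(chaotic_perturb(suboptimal))
```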

    Method for the analysis of text sentiment based on the word dual-channel network
    LI Yuan, CUI Yushuang, WANG Wei
    Journal of Xidian University. 2021, 48(6):  179-186.  doi:10.19665/j.issn1001-2400.2021.06.022

    A new dual-channel sentiment analysis method, C-A-BiLSTM, is proposed to solve the problems that traditional sentiment analysis methods have a low accuracy and cannot fully extract text feature information. The model performs convolution operations in different directions on two channels, one for word vectors and one for Word-POS vectors, to mine deeper semantic information. The word vector channel extracts more local semantic information and effectively alleviates the out-of-vocabulary problem, while the Word-POS channel uses part-of-speech tagging to obtain the part of speech of each word, which addresses the polysemy problem faced by plain word vectors. The combination of the two channels can efficiently mine deeper semantic and grammatical information, but it cannot by itself filter the key information from the text tensor and consumes considerable computational power. Therefore, the attention mechanism is introduced, and the A-BiLSTM network combined with attention is used to further extract context information and obtain more comprehensive, higher-quality features. Experimental results indicate that the accuracy, recall and F1 values of the model all exceed 94%, notably better than the CNN, SVM and BiLSTM algorithms, and that the error rate is reduced by about 1%~6%. The method has a certain advantage in text analysis tasks.

    Fast hyper-chaotic image encryption algorithm using vector operation
    GE Bin, CHEN Xu, CHEN Gang
    Journal of Xidian University. 2021, 48(6):  187-196.  doi:10.19665/j.issn1001-2400.2021.06.023

    This paper presents a novel vectored diffusion structure to overcome the low efficiency of most hyper-chaotic image encryption algorithms. First, a hash function and true random numbers are employed in generating the session key, which enhances plaintext sensitivity. Then, the original hyper-chaotic sequence is quantified to the range 0~255 and reconstructed into a key matrix of the same size as the image. Finally, the association between the plain image and the cipher image is quickly and fully confused through four rounds of parallel diffusion using vector operations. Experimental results and analysis show that the diffusion process can encrypt an image fast, since its time complexity is only O(M+N), and that its security is sufficient to resist common cryptanalysis such as brute-force attacks, statistical attacks and chosen-plaintext attacks. The results indicate that the proposed algorithm can be widely used in real-time and big-data secure communication scenarios.
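
    The O(M+N) claim can be illustrated with one forward round of vectored diffusion in NumPy: rows are diffused top to bottom with each step operating on a whole row at once, then columns left to right, so only M+N sequential steps remain. The four-round structure and the hyper-chaotic key generation are omitted, and the key matrix K below is random for illustration.

```python
import numpy as np

def diffuse(P, K):
    """One forward round of row-then-column vectored diffusion mod 256."""
    C = (P.astype(np.int64) + K) % 256
    for i in range(1, C.shape[0]):              # M-1 vectorized row steps
        C[i] = (C[i] + C[i - 1]) % 256
    for j in range(1, C.shape[1]):              # N-1 vectorized column steps
        C[:, j] = (C[:, j] + C[:, j - 1]) % 256
    return C.astype(np.uint8)

rng = np.random.default_rng(1)
P = rng.integers(0, 256, (4, 5))                # stand-in plain image
K = rng.integers(0, 256, (4, 5))                # stand-in key matrix
print(diffuse(P, K))
```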

    Algorithm for sorting key nodes based on the fusion of local characteristics and global environment
    WANG Qiuling, HE Liaoliao, XU Hong, WEI Zi'ang, KE Yuhao, GUAN Wenying, ZHU Zhangyuan
    Journal of Xidian University. 2021, 48(6):  197-203.  doi:10.19665/j.issn1001-2400.2021.06.024

    Identifying the key nodes in complex networks is very important for analyzing the network structure and controlling the propagation process. Classic key node recognition algorithms have been extended and applied to the analysis of various real networks; while these methods have their respective advantages, they also have certain shortcomings. In view of the fact that most existing key node recognition algorithms for transportation networks do not take into account both the local characteristics of the nodes and the global environment of the network, this paper proposes an improved PageRank algorithm (ST-PageRank) based on structural holes. By analyzing the position of nodes in the global network and the topological relationships between neighboring nodes, the method uses the structural-hole importance index to characterize the passenger-flow contribution between adjacent nodes in the traffic network, which overcomes both the uniform distribution in the PageRank algorithm and the neglect of global network attributes in structural holes, combining the advantages of the two. The simulation experiments select the real American aviation network and use the SIR propagation model and the Kendall correlation coefficient for evaluation, comparing ST-PageRank with degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, K-shell decomposition, the PageRank algorithm and the structural-hole constraint coefficient. The experimental results show that the proposed algorithm can identify key nodes in the transportation network reasonably and effectively, and is of certain theoretical and practical significance.
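
    A hedged sketch of the weighting idea: run PageRank with the uniform 1/out-degree split replaced by weights derived from a precomputed structural-hole importance score per node. The weighting function is an assumption for illustration, not the paper's exact definition.

```python
import numpy as np

def st_pagerank(adj, sh_importance, d=0.85, iters=100):
    """adj: (n, n) adjacency; sh_importance: (n,) structural-hole scores."""
    n = adj.shape[0]
    W = adj * sh_importance[None, :]            # edge i->j weighted by j's score
    row = W.sum(axis=1, keepdims=True)
    # Row-normalize; dangling nodes fall back to a uniform split.
    W = np.divide(W, row, out=np.full_like(W, 1.0 / n), where=row > 0)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (W.T @ r)
    return r

adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 0, 0]], dtype=float)
imp = np.array([1.0, 0.5, 2.0])                 # e.g. inverse constraint scores
print(st_pagerank(adj, imp))
```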

    Spatially adaptive EPLL denoising for low-frequency seismic random noise
    LIN Hongbo, MA Yang
    Journal of Xidian University. 2021, 48(6):  204-211.  doi:10.19665/j.issn1001-2400.2021.06.025

    The expected patch log likelihood (EPLL) framework utilizes a Gaussian mixture model (GMM) learned from external data as a signal prior. The EPLL denoises image patches via their most likely Gaussian component in the GMM and reconstructs the denoised image by weighted-averaging the denoised patches with the noisy image, achieving a successful denoising performance for random noise in seismic images. However, since its regularization parameter is associated only with the noise variance, it is difficult to balance weak signal preservation and noise suppression for desert seismic images containing non-stationary seismic signals and low-frequency colored noise; the EPLL cannot adapt to the non-stationary seismic signals in desert seismic images. A spatially adaptive EPLL (SA-EPLL) algorithm is therefore proposed in this paper under the EPLL framework. In this method, we stabilize the seismic image with a variance normalization method and construct a regularization parameter related to the patch signal-to-noise ratio (P-SNR), so that it can be adaptively adjusted with the spatiotemporally varying intensity of the non-stationary seismic signals, balancing the preservation of local details and the restoration of global features of the non-stationary signals. In addition, in the signal reconstruction process, the P-SNR is used as the weight for weighted-averaging the denoised image patches, yielding a better denoising performance with less signal loss. The SA-EPLL algorithm is applied to synthetic and field seismic images, with the results showing that the proposed method can effectively restore non-stationary signals and suppress low-frequency random noise with weak similarity in desert seismic images.
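
    A small sketch of the P-SNR-weighted reconstruction: overlapping denoised patches are averaged back into the signal with weights proportional to each patch's estimated SNR, so high-SNR patches dominate. The 1-D case is shown for brevity, and the SNR estimates are assumed given.

```python
import numpy as np

def weighted_overlap_add(patches, snrs, length, step):
    """patches: list of equal-length 1-D arrays; snrs: one weight per patch."""
    out = np.zeros(length)
    wsum = np.zeros(length)
    starts = range(0, length - len(patches[0]) + 1, step)
    for p, w, start in zip(patches, snrs, starts):
        out[start:start + len(p)] += w * p      # accumulate weighted patch
        wsum[start:start + len(p)] += w         # accumulate weights
    return out / np.maximum(wsum, 1e-12)        # normalize, avoid divide-by-0
```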

    Private information retrieval with low encoding/decoding complexity
    DAI Mingjun, LI Xiaofeng, DENG Haiyan, CHEN Bin
    Journal of Xidian University. 2021, 48(6):  212-220.  doi:10.19665/j.issn1001-2400.2021.06.026

    Most current private information retrieval methods are based on linear combination operations and have a high computational complexity. To solve this problem, this paper proposes a private information retrieval scheme based on the CP-BZD (Combination Property, Binary Zigzag Decoding) code and zigzag decoding. In an (n,k) CP-BZD system, k original source packets are encoded into n packets, where arbitrary k packets are able to recover the original k source packets, and decoding can use the zigzag decoding algorithm. The user wants to download a file from the distributed storage system; the file is encoded by the CP-BZD code and stored on the nodes, and no nodes in the system collude with each other. The operation of the proposed scheme is based on binary shift and addition operations, whose computational complexity is lower than that of linear multiplication and matrix inversion, and the communication cost reaches the lowest threshold proposed in existing work. The scheme is suitable for an arbitrary (n,k) system with low computational complexity and communication cost, and is of great significance for private information retrieval.
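
    The shift-and-add core can be illustrated with the classic two-packet zigzag construction: shifting one packet by a bit exposes a leading bit, and each recovered bit unlocks the next. CP-BZD generalizes this structure to arbitrary (n,k); only this two-packet core is shown, as a sketch.

```python
def zz_encode(a, b):                 # a, b: equal-length lists of 0/1 bits
    p = [x ^ y for x, y in zip(a, b)]            # p = a XOR b
    q = [a[0]] + [a[i] ^ b[i - 1] for i in range(1, len(a))] + [b[-1]]
    return p, q                      # q = a XOR (b shifted right by one bit)

def zz_decode(p, q):
    L = len(p)
    a, b = [0] * L, [0] * L
    a[0] = q[0]                      # exposed bit: nothing overlaps q[0]
    b[0] = p[0] ^ a[0]
    for i in range(1, L):
        a[i] = q[i] ^ b[i - 1]       # zigzag: use the bit just recovered
        b[i] = p[i] ^ a[i]
    return a, b

p, q = zz_encode([1, 0, 1, 1], [0, 1, 1, 0])
print(zz_decode(p, q))               # -> ([1, 0, 1, 1], [0, 1, 1, 0])
```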