
Table of Contents

    20 February 2022 Volume 49 Issue 1
      
    Special Issue on Privacy Computing and Data Security
    Blockchain data sharing scheme supporting attribute and proxy re-encryption
    LI Xuelian,ZHANG Xiachuan,GAO Juntao,XIANG Dengmei
    Journal of Xidian University. 2022, 49(1):  1-16.  doi:10.19665/j.issn1001-2400.2022.01.001

    The high value and sensitivity of medical data lead to problems of access control, data security, effective supervision, and privacy leakage in electronic medical data sharing. Traditional attribute-based encryption can solve one-to-many access control problems during data sharing, but challenges remain, such as low efficiency, the invalidation of an access policy once it changes slightly, and the leakage of sensitive information from the access policy. To solve these problems, first, a scheme combining attribute-based encryption with a hidden access policy and proxy re-encryption is proposed, which not only prevents privacy from being disclosed by the access policy but also realizes more efficient and dynamic data sharing. Second, to address the centralized single point of failure, the lack of supervision in the data sharing process, and the heavy storage load on the blockchain, the scheme is integrated with the blockchain, smart contracts, and the InterPlanetary File System (IPFS), implementing a low-overhead mode in which the original data ciphertext is stored in a distributed manner off the chain while the key-information ciphertext is shared on the chain. Then an architecture that supports flexible data supervision is established, which is suitable for decentralized medical data sharing scenarios. Finally, a security proof and a performance analysis covering storage, computing, and smart contract costs are conducted for the proposed scheme. The results show that the scheme can resist chosen-plaintext attacks and collusion attacks. In addition, privacy protection and effective supervision are added to the data sharing process, and the efficiency of the proposed scheme is better than that of existing data sharing schemes.
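    The off-chain/on-chain split described above can be illustrated with a minimal stdlib-only sketch. This is not the authors' ABE construction: the XOR "cipher" is a deliberately insecure placeholder for the symmetric/attribute-based encryption layers, and all class and function names are illustrative assumptions. It only shows the storage pattern: the bulky data ciphertext goes into a content-addressed store (in the spirit of IPFS), while only the small key ciphertext is recorded on the ledger.

```python
import hashlib
import os

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream "cipher" standing in for AES/ABE; NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class OffChainStore:
    """Content-addressed blob store, in the spirit of IPFS."""
    def __init__(self):
        self._blobs = {}
    def put(self, blob: bytes) -> str:
        cid = hashlib.sha256(blob).hexdigest()   # content identifier
        self._blobs[cid] = blob
        return cid
    def get(self, cid: str) -> bytes:
        return self._blobs[cid]

class OnChainLedger:
    """Append-only record of (cid, encrypted data key) entries."""
    def __init__(self):
        self.records = []
    def publish(self, cid: str, key_ciphertext: bytes):
        self.records.append({"cid": cid, "key_ct": key_ciphertext})

def share(record: bytes, user_key: bytes, store, ledger):
    data_key = os.urandom(16)
    cid = store.put(xor_encrypt(record, data_key))        # ciphertext off chain
    ledger.publish(cid, xor_encrypt(data_key, user_key))  # key ciphertext on chain
    return cid

def retrieve(entry, user_key: bytes, store):
    data_key = xor_encrypt(entry["key_ct"], user_key)
    return xor_encrypt(store.get(entry["cid"]), data_key)
```

    The point of the pattern is that the chain only ever carries a constant-size record per file, regardless of how large the medical record itself is.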

    Protection of privacy of the weighted social network under differential privacy
    XU Hua,TIAN Youliang
    Journal of Xidian University. 2022, 49(1):  17-25.  doi:10.19665/j.issn1001-2400.2022.01.002

    Due to the randomness of noise and the complexity of weighted social networks, traditional privacy protection methods cannot balance data privacy and utility in social networks. To address these problems, this paper combines histogram statistics with the non-interactive differential privacy query model and proposes a statistical release method for the histogram of weighted edges in social networks. The method regards the statistical histogram of weighted edges as the query result and designs a low-sensitivity Laplace noise random perturbation algorithm, which realizes differential privacy protection of social relations. To reduce errors, community structure entropy is introduced to divide the user nodes of the social network into several sub-communities, and an improved stochastic perturbation algorithm is proposed. Social relationships are divided with the community as the unit and Laplace noise is injected, so that each community sequence satisfies differential privacy and social relationships are protected at the community level. In addition, the characteristics of one-dimensional structural entropy are used to measure the overall degree of privacy protection that the algorithm provides for the weighted social network. Theoretical analysis and experimental results show that the proposed algorithm offers a higher degree of protection against node degree identification than the comparison algorithms, achieving a better privacy protection effect. Meanwhile, it can meet the requirements of differential privacy in large social networks and maintain a high data utility of the weighted social network.
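    The core mechanism above, releasing a histogram of edge weights under the Laplace mechanism, can be sketched as follows. This is a generic textbook Laplace mechanism, not the paper's community-entropy refinement; the bin edges and epsilon are illustrative. Since adding or removing one social relation changes exactly one bin count by 1, the L1 sensitivity of the histogram query is 1, so noise of scale 1/epsilon suffices.

```python
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # The difference of two i.i.d. exponentials with mean `scale`
    # is Laplace(0, scale).
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def dp_histogram(weights, bin_edges, epsilon, rng=None):
    """Release an edge-weight histogram under epsilon-differential privacy.

    Adding/removing one relation changes one bin count by 1,
    so the L1 sensitivity of the query is 1.
    """
    rng = rng or random.Random()
    counts = [0] * (len(bin_edges) - 1)
    for w in weights:
        for i in range(len(counts)):
            if bin_edges[i] <= w < bin_edges[i + 1]:
                counts[i] += 1
                break
    scale = 1.0 / epsilon  # sensitivity / epsilon
    return [c + laplace_noise(scale, rng) for c in counts]
```

    The paper's community-level variant applies the same perturbation per sub-community sequence rather than to one global histogram.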

    Top-k multi-keyword ciphertext retrieval scheme supporting attribute revocation
    WANG Kaiwen,WANG Shulan,WANG Haiyan,DING Yong
    Journal of Xidian University. 2022, 49(1):  26-34.  doi:10.19665/j.issn1001-2400.2022.01.003

    In recent years, the promotion of cloud computing has driven the rapid development of searchable encryption technology. However, most existing searchable encryption technologies only support single-keyword search and traversal-based ciphertext retrieval. The search process cannot be filtered according to user needs and does not support attribute revocation, which returns a large amount of irrelevant data and degrades the search user's experience. To solve these problems, a top-k multi-keyword ciphertext retrieval scheme supporting attribute revocation is proposed. On the basis of supporting multi-keyword retrieval, an index table mapping attributes to file sets is constructed through the access strategy and semantic model of the attribute-based algorithm, realizing fine-grained access control and authority management of the ciphertext while supporting top-k sorting and efficient retrieval. Homomorphically encrypted fuzzy data parameters are introduced to ensure encrypted data privacy and multi-user attribute authorization. Multi-layer data compression reduces the storage overhead, and attribute revocation is achieved with a low computational overhead. Theoretical analysis shows that the scheme has forward and backward security and keyword hiding, and can resist collusion attacks. Functional and experimental evaluations against similar schemes demonstrate that the scheme achieves a better overall performance in terms of both functionality and efficiency.
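    The top-k multi-keyword idea, scoring candidates by how many query keywords they match and returning only the k best instead of all matches, can be sketched in plaintext form (the encryption layer is omitted; all names are illustrative assumptions, not the paper's construction):

```python
import heapq

def build_index(docs):
    # docs: {doc_id: set of keywords} -> inverted index {keyword: set of doc_ids}
    index = {}
    for doc_id, kws in docs.items():
        for kw in kws:
            index.setdefault(kw, set()).add(doc_id)
    return index

def topk_search(index, query_keywords, k):
    # Score each candidate by the number of matched query keywords,
    # then keep only the k best rather than returning everything.
    scores = {}
    for kw in query_keywords:
        for doc_id in index.get(kw, ()):
            scores[doc_id] = scores.get(doc_id, 0) + 1
    # Tie-break on doc_id so results are deterministic.
    return heapq.nlargest(k, scores.items(), key=lambda kv: (kv[1], kv[0]))
```

    In the encrypted setting the scores would be computed over secure indexes, but the pruning benefit of top-k selection is the same.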

    Dynamic and verifiable scheme for ciphertext retrieval
    DU Ruizhong,WANG Yi,TIAN Junfeng
    Journal of Xidian University. 2022, 49(1):  35-46.  doi:10.19665/j.issn1001-2400.2022.01.004

    Aiming at the privacy leakage caused by the lack of correctness verification of search results and data updates, this paper proposes a dynamic and verifiable ciphertext retrieval scheme. First, an AMAC is generated from the index, the index and the AMAC are encrypted and uploaded to the blockchain, and search results are returned to the user through a smart contract, which solves the problem of incorrect results being returned by a malicious server. Second, a version pointer is introduced to point to the update state, so that the trapdoor generated by a keyword differs in each update state, ensuring that no information is leaked when the data is updated. Moreover, the scheme exploits Ethereum's own characteristics to match an externally owned account (EOA) with a public key, encrypt the authorization information, and send the transaction, realizing the data owner's authorized access control over users. Finally, the security analysis shows that the scheme satisfies adaptive security as well as forward and backward security, and can well protect the security of encrypted data. Experimental results show that the scheme reduces index generation and verification time and is highly efficient in search.
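    The verification idea, binding an authenticator to each posting list so a client can detect tampered or incomplete results from a malicious server, can be sketched with a plain HMAC (a stand-in for the paper's AMAC; the class and function names are illustrative assumptions, and the blockchain/smart-contract layer is omitted):

```python
import hashlib
import hmac

def posting_mac(key: bytes, keyword: str, postings) -> bytes:
    # Authenticator over the keyword and its (sorted, canonical) result list.
    msg = keyword.encode() + b"|" + ",".join(sorted(postings)).encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

class VerifiableIndex:
    """Plays the role of the (possibly malicious) server / contract storage."""
    def __init__(self, key: bytes, index):
        self._table = {kw: (sorted(ids), posting_mac(key, kw, ids))
                       for kw, ids in index.items()}
    def search(self, keyword):
        # Returns the result list together with its authenticator.
        return self._table.get(keyword, ([], b""))

def verify(key: bytes, keyword: str, postings, tag: bytes) -> bool:
    # Client-side check: recompute the authenticator and compare in constant time.
    return hmac.compare_digest(posting_mac(key, keyword, postings), tag)
```

    A server that drops, adds, or reorders file identifiers cannot produce a matching tag without the owner's key.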

    Efficient cloud storage data auditing scheme without bilinear pairing
    YANG Haibin,LI Ruifeng,YI Zhengge,NIU Ke,YANG Xiaoyuan
    Journal of Xidian University. 2022, 49(1):  47-54.  doi:10.19665/j.issn1001-2400.2022.01.005

    In existing cloud storage data integrity auditing schemes, only a few tags participate in integrity verification while most data tags sit idle, which wastes computing and storage resources. To solve this problem, this paper constructs an efficient cloud storage data auditing scheme without bilinear pairings. The scheme uses the Schnorr signature algorithm to generate tags only for the audited data blocks, which reduces the user's computing overhead, and it can efficiently complete dynamic updates to the data. In the challenge phase, blockchain technology is used to generate challenge parameters from the timestamp, ensuring the randomness of the challenge parameters; the cloud service provider and the third-party auditor do not need to interact, which reduces the communication overhead. Throughout the auditing phase, the scheme avoids expensive operations such as bilinear mapping, exponentiation, and map-to-point hash functions. The security analysis shows that the scheme is safe and effective, can resist forgery attacks and replay attacks from cloud service providers, and protects the privacy of the data and the private key. In the efficiency analysis, numerical and experimental results show that the scheme achieves higher auditing and dynamic update efficiency than existing cloud auditing schemes, and its advantages become more obvious as the number of data blocks and challenge blocks increases.
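    The tag primitive named above, a Schnorr signature, works without any pairing operation, which is the source of the efficiency claim. A toy sketch over a small prime-order subgroup (parameters chosen for readability, far too small for real security; this is the generic Schnorr scheme, not the paper's full auditing protocol):

```python
import hashlib
import random

# Toy Schnorr group: g = 4 has prime order q = 1019 in Z_2039^* (2039 = 2*1019 + 1).
P, Q, G = 2039, 1019, 4

def h(r: int, msg: bytes) -> int:
    # Fiat-Shamir challenge hash, reduced modulo the group order.
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % Q

def keygen(rng):
    x = rng.randrange(1, Q)          # private key
    return x, pow(G, x, P)           # (x, y = g^x mod p)

def sign(x, msg, rng):
    k = rng.randrange(1, Q)
    r = pow(G, k, P)                 # commitment
    e = h(r, msg)
    s = (k + x * e) % Q              # response
    return e, s

def verify(y, msg, sig):
    e, s = sig
    # Recompute r' = g^s * y^(-e); the signature is valid iff H(r', m) = e.
    r = pow(G, s, P) * pow(y, (Q - e) % Q, P) % P
    return h(r, msg) == e
```

    In the auditing setting, such a signature is computed per challenged data block, so tags are generated only for blocks that are actually audited.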

    Multi-keyword search encryption scheme supporting flexible access control
    YAN Xixi,ZHAO Qiang,TANG Yongli,LI Yingying,LI Jingran
    Journal of Xidian University. 2022, 49(1):  55-66.  doi:10.19665/j.issn1001-2400.2022.01.006

    In most searchable encryption schemes, the cloud server compares the trapdoor with all secure indexes in the database during a search, which causes excessive overhead. To address this problem, an efficient multi-keyword search encryption scheme supporting flexible access control is proposed. Before the sensitive data is encrypted and uploaded to the cloud server, it is clustered using the k-means algorithm into several clusters, each of which is given a different index through Latent Dirichlet Allocation. In the search phase, the cloud server finds the most relevant cluster through the Jaccard distance between the keyword set in the trapdoor and each cluster index and searches only the matched clusters, reducing the number of comparisons between the trapdoor and the index. The cloud server then obtains the file list using a B+ tree-based data structure to improve search efficiency. In addition, the scheme achieves encrypted file sharing by combining a broadcast encryption mechanism, which allows users to search for keywords within the authorized file subset, taking the keyword set of each cluster as the user access rights. The performance comparison and experimental analysis show that the user private key is of constant size, the communication and storage costs are independent of the number of authorized users, and the search precision reaches about 90%.
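    Two pieces of the pipeline above are easy to sketch directly: picking the most relevant cluster by Jaccard similarity between the trapdoor's keyword set and each cluster index, and checking that a query stays within the keyword set a user is authorized for. The function names and sample data are illustrative assumptions, not the paper's API:

```python
def jaccard(a: set, b: set) -> float:
    # Jaccard similarity |A ∩ B| / |A ∪ B|; 0 for two empty sets.
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def best_cluster(cluster_indexes, query: set):
    # cluster_indexes: {cluster_id: keyword set extracted per cluster (e.g. via LDA)}.
    # Sorted iteration gives a deterministic tie-break.
    return max(sorted(cluster_indexes),
               key=lambda cid: jaccard(cluster_indexes[cid], query))

def authorized(query: set, user_rights: set) -> bool:
    # Access-control check: every queried keyword must lie in the granted subset.
    return query <= user_rights
```

    Restricting the search to the single best cluster is what replaces the full index scan that most schemes perform.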

    Method for the protection of spatiotemporal correlation location privacy with semantic information
    ZUO Kaizhong,LIU Rui,ZHAO Jun,CHEN Zhangyi,CHEN Fulong
    Journal of Xidian University. 2022, 49(1):  67-77.  doi:10.19665/j.issn1001-2400.2022.01.007

    With the rapid development of communication network technology, intelligent electronic devices, and positioning technology, location-based services make daily life more convenient; however, users' location privacy faces threats that cannot be ignored. Existing continuous-query-oriented location privacy protection methods ignore the semantic information contained in users' movement trajectories, allowing attackers to use that information to mine users' behavior habits, personal preferences, and other private matters. At the same time, traditional fake-trajectory privacy protection methods generate multiple fake trajectories to confuse the user's real trajectory, but the transitions between semantic location points in a fake trajectory do not conform to the user's behavior rules. We therefore propose a spatiotemporal correlation location privacy protection method with semantic information. The method combines the user's historical semantic trajectories with the semantic information of locations to construct a user behavior model, and constructs fake trajectories that conform to the user's behavior rules according to the transition probability and spatiotemporal correlation between semantic locations at adjacent moments in the model, thus confusing the user's real trajectory. Finally, the algorithm is compared with existing algorithms on a real data set, which shows that it can effectively reduce the risk of location privacy leakage in continuous query scenarios when the attacker has relevant background knowledge.
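    The core of the construction, generating dummy trajectories whose transitions follow the user's learned behavior model rather than being uniformly random, can be sketched with a first-order Markov chain. The transition table below is a made-up illustration of what might be learned from a user's historical semantic trajectories, not the paper's model:

```python
import random

# Hypothetical transition model P(next place | current place),
# assumed to be learned from the user's historical semantic trajectories.
TRANSITIONS = {
    "home":       {"cafe": 0.6, "office": 0.4},
    "cafe":       {"office": 0.9, "home": 0.1},
    "office":     {"restaurant": 0.7, "home": 0.3},
    "restaurant": {"home": 1.0},
}

def fake_trajectory(start, length, rng):
    # Walk the model so every transition in the dummy trace is one the
    # user could plausibly make, instead of a uniformly random jump.
    traj = [start]
    for _ in range(length - 1):
        nxt = TRANSITIONS.get(traj[-1])
        if not nxt:
            break
        places, probs = zip(*nxt.items())
        traj.append(rng.choices(places, weights=probs, k=1)[0])
    return traj
```

    Because every step is drawn from the behavior model, an attacker who knows the user's habits cannot filter out the fakes by checking transition plausibility, which is exactly the weakness of uniformly random dummies.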

    Research progress and applications of cryptographic accumulators
    MIAO Meixia,WU Panru,WANG Yunling
    Journal of Xidian University. 2022, 49(1):  78-91.  doi:10.19665/j.issn1001-2400.2022.01.008

    Cryptographic accumulators can accumulate all the elements in a set and efficiently give a (non)membership proof for any element, that is, prove whether an element exists in the set. Cryptographic accumulators are mainly divided into three types: static accumulators, dynamic accumulators, and universal accumulators. Specifically, static accumulators accumulate the elements of a static set; dynamic accumulators further allow elements to be dynamically added to and deleted from the accumulated set; universal accumulators support both membership proofs and non-membership proofs (showing that elements are not in the set). For these different types, many scholars have given concrete constructions based on different cryptographic tools, which can be divided into RSA-based, bilinear-mapping-based, and Merkle-hash-tree-based cryptographic accumulators. Cryptographic accumulators have a wide range of application scenarios, such as group signatures, ring signatures, anonymous credentials, timestamping, and outsourced data verification. In recent years, cryptographic accumulators have also been applied to the blockchain to solve the problem of high storage overhead. This paper first classifies, analyzes, and summarizes existing schemes in terms of their constructions and functions, then introduces the main application scenarios of cryptographic accumulators, and finally points out some problems faced by existing schemes as well as future development trends and research directions.
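    The first construction family mentioned, the RSA-based accumulator, fits in a few lines. The sketch below uses a deliberately tiny modulus so it is readable; a real deployment would use a modulus of 2048 bits or more with unknown factorization, and would hash arbitrary elements to primes before accumulating them:

```python
# Toy RSA accumulator. Elements must be (represented by) primes.
N = 3233        # 61 * 53; the factorisation is assumed unknown to users
G = 2           # public base

def accumulate(primes):
    # acc = G^(product of all elements) mod N, built incrementally.
    acc = G
    for p in primes:
        acc = pow(acc, p, N)
    return acc

def witness(primes, member):
    # Membership witness = accumulator of every element except `member`.
    return accumulate([p for p in primes if p != member])

def verify(acc, member, wit):
    # wit^member = G^(product of the others * member) = acc  iff member was accumulated.
    return pow(wit, member, N) == acc
```

    The accumulator value is constant-size no matter how many elements are absorbed, which is why this primitive helps reduce blockchain storage overhead.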

    Low computation-complexity multi-scalar multiplication algorithm for the ECDSA
    HUANG Hai,NA Ning,LIU Zhiwei,YU Bin,ZHAO Shilei
    Journal of Xidian University. 2022, 49(1):  92-101.  doi:10.19665/j.issn1001-2400.2022.01.009

    With the rapid development of e-commerce, the importance of information security is increasing. Cryptography ensures the safety, confidentiality, and integrity of data, and prevents tampering, in the process of communication. Digital signature algorithms such as the Elliptic Curve Digital Signature Algorithm (ECDSA) provide key technologies for secure e-commerce. However, the design architecture of the ECDSA executes multi-scalar multiplication and single-scalar multiplication with separate algorithms, which generally increases the computational complexity. The proposed algorithm uses the fetching-mode method to construct the joint double-base chain (JDBC), applying the fetching-mode operation to the part that is indivisible by the base while pre-computing the obtained remainder. Compared with the greedy method used in existing JDBC constructions, the length of the produced base chain is reduced, and the computational complexity of multi-scalar multiplication is significantly lowered. Experimental results indicate that the low-complexity algorithm reduces complexity by 9.84%~30.75% for multi-scalar multiplication and by 3.88%~26.81% for single-scalar multiplication on the P-256 curve. In addition, the algorithm reduces the complexity of joint processing by 16.65%, and its point-operation count is estimated to be reduced by 25.00% compared with the wNAF and JDBC methods. A model built in Python shows that the running speed increases by at least 14.80% compared with current algorithms.
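    Why joint processing of two scalars beats two separate scalar multiplications can be seen in the classic Shamir's-trick algorithm, which shares a single doubling chain between both scalars. This is the standard baseline technique, not the paper's JDBC construction; the group operations are passed in as functions so the sketch works for any additive group (here tested on integers modulo a prime rather than a full elliptic-curve implementation):

```python
def joint_scalar_mult(k1, k2, P, Q, add, dbl, zero):
    # Shamir's trick: compute k1*P + k2*Q with ONE shared doubling chain,
    # scanning both scalars' bits from the most significant end.
    pq = add(P, Q)                     # precomputed joint point
    r = zero
    for i in range(max(k1.bit_length(), k2.bit_length()) - 1, -1, -1):
        r = dbl(r)
        b1, b2 = (k1 >> i) & 1, (k2 >> i) & 1
        if b1 and b2:
            r = add(r, pq)
        elif b1:
            r = add(r, P)
        elif b2:
            r = add(r, Q)
    return r
```

    Computing k1*P and k2*Q separately costs two full doubling chains; the joint scan costs one, and double-base chains push the same idea further by also sharing triplings.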

    New class of complete permutation monomials over finite fields
    HUANG Mengmeng,WU Gaofei
    Journal of Xidian University. 2022, 49(1):  102-110.  doi:10.19665/j.issn1001-2400.2022.01.010

    Complete permutation polynomials (CPPs) over finite fields have important applications in cryptography, coding theory, and combinatorial design theory. The block cipher algorithm SMS4, published in China in 2006, is designed based on CPPs. Recently, CPPs have been used in the construction of cryptographic functions, so the construction of CPPs over finite fields has become a hot research topic in cryptography. CPPs with few terms, especially monomial CPPs over finite fields, attract attention due to their simple algebraic form and easy implementation. In this paper, a detailed survey of the constructions of monomial CPPs is presented. We then give a class of monomial CPPs over finite fields of odd characteristic by using a powerful criterion for permutation polynomials. Our construction enriches the known results on monomial CPPs. In addition, we also calculate the inverses of these bijective monomials.
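    The defining property is easy to check exhaustively over a small prime field: f is a complete permutation polynomial over GF(p) iff both x ↦ f(x) and x ↦ f(x) + x are bijections. A brute-force checker (for illustration only; the paper's results are proved, not enumerated):

```python
def is_permutation(f, p):
    # f permutes GF(p) iff its image has all p elements.
    return len({f(x) % p for x in range(p)}) == p

def is_complete_permutation(f, p):
    # CPP: both f(x) and f(x) + x must be permutations of GF(p).
    return is_permutation(f, p) and is_permutation(lambda x: f(x) + x, p)
```

    For example, over GF(7) the monomial 3x is a CPP (both 3x and 4x are bijections), while x^5 is a permutation polynomial (gcd(5, 6) = 1) but not a CPP, since x^5 + x collides.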

    Analysis of the mean difference of intermediate-values in a white box SM4
    ZHANG Yueyu,XU Dong,CAI Zhiqiang,CHEN Jie
    Journal of Xidian University. 2022, 49(1):  111-120.  doi:10.19665/j.issn1001-2400.2022.01.011

    In the white-box attack context, the attacker has full access to the cryptographic system. To ensure key security in this context, the concept of white-box cryptography was proposed. In 2016, Bos et al. proposed differential computation analysis (DCA), introducing the idea of side-channel analysis into white-box cryptography for the first time and creating a new path for white-box cryptanalysis. DCA takes the software execution traces recorded while a white-box cryptographic program runs as the analytical object and uses statistical methods to extract the key; whether the attacker knows the design details of the white-box implementation has little impact on the analysis. The white-box SM4 is the implementation of the commercial cryptographic standard algorithm SM4 under the white-box security model. To evaluate the security of the white-box SM4 efficiently, a side-channel analytical method for white-box SM4 implementations, called Intermediate-values Mean Difference Analysis (IVMDA), is proposed based on research on DCA. IVMDA directly analyzes the intermediate values produced during encryption and uses linear combinations to counteract the confusion in the white-box SM4. With at least 60 random plaintexts, the first-round key can be completely extracted in about 8 minutes. Compared with existing analytical methods, this method is convenient to deploy, suitable for practical application environments, and highly efficient.
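    The statistical engine behind this family of attacks, a difference-of-means distinguisher over predicted intermediate-value bits, can be simulated on a toy cipher. Everything below is a synthetic stand-in (a shuffled S-box, a one-byte key, Gaussian noise), not SM4 or the paper's IVMDA; it only shows why the correct key guess separates the traces into two groups with visibly different means:

```python
import random

_rng = random.Random(7)
SBOX = list(range(256))
_rng.shuffle(SBOX)                  # stand-in nonlinear S-box

SECRET_KEY = 0x3A                   # byte hidden inside the "white-box" program

def software_trace(pt, noise_rng):
    # One recorded sample: noisy LSB of the intermediate value S(pt XOR key).
    return (SBOX[pt ^ SECRET_KEY] & 1) + noise_rng.gauss(0.0, 0.2)

def mean(xs):
    return sum(xs) / len(xs)

def recover_key(samples):
    # Difference of means: the correct guess splits the traces into two
    # groups whose means differ by ~1; wrong guesses give a gap near 0.
    scores = {}
    for guess in range(256):
        groups = [[], []]
        for pt, v in samples:
            groups[SBOX[pt ^ guess] & 1].append(v)
        scores[guess] = abs(mean(groups[1]) - mean(groups[0]))
    return max(scores, key=scores.get)
```

    The nonlinearity of the S-box is what makes wrong guesses decorrelate; with a linear target bit, many guesses would score identically.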

    A differential fault attack of fruit v2 and fruit 80
    QIAO Qinglan,DONG Lihua
    Journal of Xidian University. 2022, 49(1):  121-133.  doi:10.19665/j.issn1001-2400.2022.01.012

    Since 2016, small-state stream ciphers such as Fruit v2, Fruit-80, Fruit-128, and Fruit-F have been proposed based on the lightweight stream cipher Sprout. The difference between Fruit and Sprout is that the round key that participates in the internal state update in Fruit does not involve the internal states of the NFSR and LFSR, which makes recovering the key of Fruit harder than that of Sprout. In this paper, based on Maitra's differential fault attack on Sprout and Banik's differential fault attack on Grain, we describe a differential fault attack (DFA) on Fruit v2 and Fruit-80 under the most relaxed assumption: the attacker can inject multiple, time-synchronized, single bit-flipping faults into the same, albeit random, register location. We first accurately identify the location of the fault injection, and then, according to the affine property of the output function, formulate a sufficient number of linear equations to recover the whole internal state of the cipher. The results show that the time complexity required to determine the internal state of Fruit v2 and Fruit-80 is 2^16.3 (LFSR) and 2^6.3 (NFSR). For key recovery, with the help of the CryptoMiniSat 2.9.5 SAT solver, all the equations can be solved in about 10 minutes. Statistically, the number of faults needed for the attack is 2^7.3, and the complexity of identifying the correct fault location is 2^6.3 (Fruit v2) and 2^7.3 (Fruit-80), respectively.

    Information and Communications Engineering
    Algebraic method for constructing Raptor-like multi-rate QC-LDPC codes
    LI Hua'an,BAI Baoming,XU Hengzhou,CHEN Chao
    Journal of Xidian University. 2022, 49(1):  134-141.  doi:10.19665/j.issn1001-2400.2022.01.013

    Variable-rate low-density parity-check (LDPC) codes support various code rates and play an important role in most communication systems. Two representatives of such codes are multi-rate LDPC (MR-LDPC) codes with a constant codeword length and rate-compatible LDPC codes with a constant information length. Combining algebraic and superposition construction methods, this paper studies the design and construction of Raptor-like multi-rate quasi-cyclic LDPC codes by progressively adjusting the lifting sizes for different code rates. With the proposed method, as the code rate decreases, the base matrix and exponent matrix of the constructed codes grow larger while the lifting sizes are reduced. Besides, to achieve a constant codeword length with various information lengths, both shortening of information bits and puncturing of parity bits are considered. The resulting codes simultaneously possess quasi-cyclic and Raptor-like structures, so the corresponding encoder/decoder can be easily implemented in hardware and encoding can be done directly from the parity-check matrix. Moreover, the exponent matrices of the constructed codes have a specific algebraic structure, so the storage complexity is very low. Numerical results show that, compared with some standard codes, e.g., WiMAX LDPC codes, the constructed codes achieve a better overall performance, providing a promising scheme for the coding-method fusion design of future ground networks and other communication systems.
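    The quasi-cyclic structure referred to above means the parity-check matrix is fully described by a small exponent matrix: each entry is either a cyclic-shift value (expanded to a z x z circulant permutation matrix) or -1 (expanded to a z x z zero block). A minimal lifting routine (generic QC-LDPC expansion, not the paper's particular exponent matrices):

```python
def circulant(shift, z):
    # z x z identity matrix cyclically shifted by `shift`; -1 denotes the zero block.
    if shift < 0:
        return [[0] * z for _ in range(z)]
    return [[1 if c == (r + shift) % z else 0 for c in range(z)] for r in range(z)]

def lift(exponent_matrix, z):
    """Expand a QC-LDPC exponent matrix into the full binary parity-check matrix."""
    H = []
    for row in exponent_matrix:
        blocks = [circulant(s, z) for s in row]
        for r in range(z):
            H.append([b[r][c] for b in blocks for c in range(z)])
    return H
```

    Storing only the exponent matrix instead of H is why the storage complexity of QC codes is so low, and shrinking or growing z is exactly the "adjusting the lifting sizes" step used to move between code rates.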

    Preference aware participant selection strategy for edge-cloud collaborative crowdsensing
    WANG Ruyan,LIU Jia,HE Peng,CUI Yaping
    Journal of Xidian University. 2022, 49(1):  142-151.  doi:10.19665/j.issn1001-2400.2022.01.014

    Crowdsensing relies on the mobility of a large number of users and the sensing ability of intelligent devices to complete data collection, and has become an effective way of collecting sensing data. In existing crowdsensing networks, the cloud platform is responsible for the whole process of task distribution and data collection, so it is difficult to process a large amount of real-time data effectively, and the sensing cost is high. Different participants have different interests in tasks; ignoring preference factors leads to low efficiency and poor satisfaction of the selected participants. Aiming at these problems, a preference-aware participant selection strategy under an edge-cloud collaborative architecture is proposed. The participant selection process is performed by the cloud platform and edge nodes in collaboration: the cloud platform distributes tasks to edge nodes based on task locations and collects data from the edge nodes, while each edge node is responsible for participant selection, quantifying a user's preference for a task by evaluating the time matching degree, distance matching degree, task type, and reward, and quantifying a task's preference for a user by evaluating the user's reputation and sensing cost. Based on bilateral preferences and stable matching theory, the participant selection problem is modeled as a many-to-one stable matching problem between users and tasks, and the stable matching is solved to maximize participant preference. The results show that the participants selected by this strategy have high satisfaction and that the data quality collected by the platform is good.
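    The many-to-one stable matching step can be sketched with deferred acceptance (the hospitals/residents form of Gale-Shapley): users propose to tasks in their preference order, and each task provisionally keeps its `capacity` most-preferred proposers. The preference lists here are illustrative placeholders for the quantified bilateral preferences described above:

```python
def stable_match(user_prefs, task_prefs, capacity):
    """Deferred acceptance: users propose to tasks in preference order;
    each task keeps its `capacity` best-ranked proposers seen so far."""
    rank = {t: {u: i for i, u in enumerate(prefs)} for t, prefs in task_prefs.items()}
    assigned = {t: [] for t in task_prefs}
    next_choice = {u: 0 for u in user_prefs}
    free = list(user_prefs)
    while free:
        u = free.pop()
        if next_choice[u] >= len(user_prefs[u]):
            continue                      # u exhausted its list, stays unmatched
        t = user_prefs[u][next_choice[u]]
        next_choice[u] += 1
        assigned[t].append(u)
        assigned[t].sort(key=lambda x: rank[t][x])
        if len(assigned[t]) > capacity[t]:
            free.append(assigned[t].pop())  # worst-ranked proposer is bounced
    return assigned
```

    Because users propose, the resulting matching is stable and user-optimal, which matches the stated goal of maximizing participant preference.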

    Novel task offloading solutions based on immune optimization in mobile edge computing
    ZHU Sifeng,SUN Enlin,CHAI Zhengyi
    Journal of Xidian University. 2022, 49(1):  152-160.  doi:10.19665/j.issn1001-2400.2022.01.015

    Mobile edge computing reduces the task computing delay and energy consumption of mobile terminals by sinking computing and storage resources to the edge of the mobile network, effectively meeting the high-bandwidth and low-delay requirements driven by the rapid development of the mobile Internet and the Internet of Things. As a major advantage of mobile edge computing, computation offloading improves mobile service capability by migrating heavy computing tasks to edge servers. In this paper, a task offloading solution that minimizes system response delay and mobile terminal energy consumption is proposed for low-latency, low-energy mobile terminal applications in mobile edge computing scenarios. First, based on comprehensive consideration of the delay and energy consumption of task execution on mobile terminals, the task slicing model, delay model, energy consumption model, and target optimization model are constructed. Second, an improved immune optimization algorithm and a task offloading solution based on immune optimization are proposed. Finally, the proposed solution is compared with local execution and with offloading based on the genetic algorithm. Simulation results show that the proposed scheme outperforms the comparison schemes in terms of the comprehensive cost of delay and energy consumption, and can meet the offloading requirements of mobile terminal applications with low delay and low energy consumption.
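    The delay-plus-energy objective that such offloading schemes optimize can be written down concretely for a single task. This is a standard textbook cost model with made-up parameter names and weights, not the paper's task-slicing formulation; the immune optimization layer (which searches over many tasks jointly) is omitted:

```python
def offload_decision(cycles, data_bits, f_local, f_edge, rate,
                     p_cpu, p_tx, w_delay=0.5, w_energy=0.5):
    # Local cost: computation delay on the terminal CPU plus the energy it burns.
    local = w_delay * (cycles / f_local) + w_energy * (p_cpu * cycles / f_local)
    # Offload cost: upload delay + edge execution delay; the terminal
    # only spends energy on the radio while transmitting.
    offload = (w_delay * (data_bits / rate + cycles / f_edge)
               + w_energy * (p_tx * data_bits / rate))
    return ("offload", offload) if offload < local else ("local", local)
```

    The decision flips with the uplink rate: a fast link makes the edge's higher CPU frequency worthwhile, while a slow link makes the upload delay dominate and local execution win.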

    Research on the spectrum defragmentation algorithm for the elastic optical network based on MCDM
    WANG Jingyu,RAN Jinzhi,WANG Ping
    Journal of Xidian University. 2022, 49(1):  161-172.  doi:10.19665/j.issn1001-2400.2022.01.016

    To solve the problem of the increased service request blocking rate and bandwidth blocking rate caused by spectrum fragmentation in the elastic optical network, the causes of spectrum fragmentation are analyzed in detail. According to the characteristics of services carried on optical networks, an elastic optical network defragmentation algorithm based on multi-criteria decision-making (MCDM) is proposed from the perspective of improving spectrum utilization. The algorithm uses the multi-criteria decision-making method to handle the selection problems encountered during defragmentation and makes decisions by comprehensively considering various evaluation indexes. In the traffic routing stage, the algorithm is divided into five stages; in each stage, the best decision is made according to the current state of the optical network to sort out the spectrum fragments. Each stage marks different types of connections with different labels and judges them according to the weights set by the multi-criteria decision-making method, and finally the best scheme is adopted to achieve the best defragmentation effect. Simulation on concrete examples shows that the proposed algorithm achieves a lower bandwidth blocking rate (36% under high load) and higher spectrum utilization (up to 65% under high load), effectively improving the network request blocking rate under high network load and providing a theoretical reference for handling spectrum fragmentation in elastic optical networks under practical conditions.
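    The decision step, ranking candidate defragmentation actions by several weighted criteria, can be sketched with a basic weighted-sum MCDM model. The criteria names, weights, and alternatives below are illustrative assumptions, not the paper's evaluation indexes:

```python
def normalise(values, benefit=True):
    # Min-max normalisation; benefit criteria are "higher is better",
    # cost criteria are inverted so that higher normalised scores are better.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    if benefit:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

def mcdm_rank(alternatives, criteria):
    # alternatives: {name: {criterion: value}}
    # criteria: list of (criterion, weight, is_benefit) tuples.
    names = list(alternatives)
    score = {n: 0.0 for n in names}
    for crit, weight, benefit in criteria:
        col = normalise([alternatives[n][crit] for n in names], benefit)
        for n, v in zip(names, col):
            score[n] += weight * v
    return sorted(names, key=score.get, reverse=True)
```

    With, say, "freed contiguous slots" as a benefit criterion and "connections rerouted" as a cost criterion, the top-ranked alternative is the defragmentation action the algorithm would execute in that stage.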

    Method for online reconstruction of marine monitoring data with sequential compressed sensing
    LIU Ge,RUI Guosheng,TIAN Wenbiao,TIAN Runlan,WANG Xiaofeng
    Journal of Xidian University. 2022, 49(1):  173-180.  doi:10.19665/j.issn1001-2400.2022.01.017

    The evaporation duct is a kind of electromagnetic wave transmission medium that occurs randomly in the atmosphere near sea level and is an important part of the complex electromagnetic environment of the naval battlefield. At present, the characteristic parameters of the evaporation duct are usually obtained by substituting various marine monitoring data (MMD) into a specific calculation model; these meteorological elements usually include atmospheric temperature, humidity, wind speed, sea surface temperature, etc. To obtain the distribution of the evaporation duct over a large area and a long period, MMD must be observed continuously for a long time. Aiming at the poor performance of traditional compressed sensing methods in reconstructing time-varying MMD, an online reconstruction method for MMD based on sequential compressed sensing with low-rank regularization is proposed. The algorithm first analyzes real MMD, revealing the low rank of the data in the spatial structure, and then constructs a low-rank regularization term combined with the existing historical data, with the data fidelity term established according to the condition that the data in the overlapping area of consecutive windows are equal. Finally, the reconstruction optimization problem is solved based on the alternating direction method of multipliers (ADMM). Theoretically, the effectiveness of the algorithm is proved by convergence and complexity analysis, and experimental results verify that the algorithm achieves higher reconstruction accuracy.

    Parameter estimation of the near-field source using the PCA-BP algorithm with the array error
    WANG Le,ZHAO Peiyao,WANG Lanmei,WANG Guibao
    Journal of Xidian University. 2022, 49(1):  181-187.  doi:10.19665/j.issn1001-2400.2022.01.018

    The steering vector of the array will be biased when there is an error in the signal receiving array, which affects the performance of the parameter estimation algorithm. In order to reduce the influence of the array error on the parameter estimation results and to reduce the computational complexity, a combination of intelligent algorithms and principal component analysis is used. First, to avoid the tedious process of error modeling, the back propagation neural network method is used to absorb errors and other factors into the network model. Second, training a near-field source parameter estimation model with the back propagation neural network alone is time-consuming and complicated. In order to shorten the training time and reduce the amount of calculation, the principal component analysis method is introduced into the back propagation neural network model to reduce the dimension of the signal feature matrix. The reduced-dimension signal feature matrix is then used as the input feature of the back propagation neural network, with the near-field source parameters as the expected output for training, so as to simplify the network structure and shorten the training time. Finally, the received data containing the signal information to be estimated are input into the trained network model to obtain the estimated value of the signal incident direction. This algorithm can accurately estimate the parameters of the near-field source in the presence of errors in the receiving array, and it improves the estimation performance of the near-field source signal parameters under a low signal-to-noise ratio. Simulation results show the effectiveness of the algorithm.
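
    The dimension-reduction step that feeds the network can be sketched with a standard SVD-based PCA. This illustrates only the PCA stage, not the BP network training or the array-error handling; the matrix sizes are illustrative.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of a feature matrix X onto its top-k principal components.

    The centred data's right singular vectors are the principal axes; keeping
    only k of them shrinks the network's input dimension, which is how the
    abstract's scheme simplifies the network structure and training time.
    """
    Xc = X - X.mean(axis=0)                       # centre the features
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # (n_samples, k) reduced features
```

    The reduced matrix would then serve as the input features of the back propagation network, with the near-field source parameters as the training targets.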

    Information and Communications Engineering
    Efficient implementation of unconditionally stable FDTD with the local eigenvalue solution
    ZHAO Sihan,WEI Bing,HE Xinbo
    Journal of Xidian University. 2022, 49(1):  188-193.  doi:10.19665/j.issn1001-2400.2022.01.019

    The explicit and unconditionally stable finite-difference time-domain (US-FDTD) method has a high computational cost for solving the eigenvalues of the global matrix and for field iteration when there are a large number of unknown fields or unstable modes. To solve this problem, an efficient implementation scheme of US-FDTD based on the local eigenvalue solution (USL-FDTD) is given. With this scheme, all unstable modes in the entire system can be obtained accurately and efficiently without solving the eigenvalue problem of the global matrix. In the implementation, the computational domain is first divided into two parts: region I contains all fine grids and the adjacent coarse grids, and region II consists of the remaining coarse grids. The original global system matrix can then be divided naturally into four local matrix blocks. These four small matrices contain the grid information on regions I and II respectively, together with the coupling relationship between the two regions. Since unstable modes exist only in fine grids and the adjacent coarse grids, all unstable modes can be obtained by solving the eigenvalue problem of the local matrix corresponding to region I. Finally, the fields in regions I and II can be calculated respectively, the two regions being associated by two coupling matrix blocks in which no unstable mode appears. USL-FDTD not only decreases the dimension of the matrix to be solved, but also reduces the computational complexity and improves the computational efficiency. Numerical results show the accuracy and efficiency of this implementation.
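
    The block-partitioning idea can be sketched numerically: instead of diagonalizing the full iteration matrix, only the region-I block is diagonalized. This is a minimal sketch assuming, as the abstract's premise states, that unstable modes live only in region I; the matrix layout (region-I unknowns first) and the stability limit are assumptions for illustration.

```python
import numpy as np

def unstable_modes_local(A, n1, limit=1.0):
    """Find unstable modes from the region-I block of the global matrix.

    A:  global iteration matrix whose leading n1 rows/columns are taken to
        correspond to region I (fine grids plus adjacent coarse grids).
    The remaining block A22 covers the coarse grids of region II, and the
    off-diagonal blocks A12/A21 only couple the two regions.
    """
    A11 = A[:n1, :n1]                 # local region-I block
    w = np.linalg.eigvals(A11)        # small local eigenvalue problem
    return w[np.abs(w) > limit]       # modes whose magnitude exceeds the limit
```

    Solving an n1 x n1 eigenproblem instead of the global one is where the scheme saves both memory and time.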

    Information and Communications Engineering
    Wide-range and high-accuracy four-phase DLL with the adaptive-bandwidth scheme
    YANG Xue,LIU Fei,HUO Zongliang
    Journal of Xidian University. 2022, 49(1):  194-201.  doi:10.19665/j.issn1001-2400.2022.01.020

    This paper presents a four-phase output delay locked loop (DLL) with an adaptive-bandwidth delay chain structure, which is suitable for the clock generation of a NAND Flash high-speed interface circuit meeting the ONFI 4.2 international protocol standard. In order to solve the problem of the limited delay time of the traditional delay chain over a wide frequency range, a configurable delay chain circuit structure is proposed, which can select the appropriate delay units in different frequency bands so that the operating frequency range of the DLL is extended while the lock accuracy is maintained. In addition, an adaptive control circuit based on a frequency detector is proposed, which can track the input clock frequency, automatically configure the delay chain, and realize the adaptive bandwidth of the DLL. The DLL circuit is designed in the SMIC 28 nm CMOS process. Simulation results show that the locking range of the DLL is [22 MHz, 1.6 GHz], with a maximum locking accuracy of 17 ps at 25℃, a 0.9 V power supply, and the typical process corner.
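
    The adaptive band-selection behaviour can be modelled in a few lines: the frequency detector's measurement picks the delay-chain configuration whose band contains the input clock. The band edges and unit counts below are purely illustrative, not the paper's actual design values.

```python
def select_delay_units(freq_hz, bands):
    """Behavioural sketch of the adaptive-bandwidth delay chain control.

    bands: list of (f_low_hz, f_high_hz, n_units) tuples.  Lower input
    frequencies need a longer chain (more delay units) to cover one clock
    period; higher frequencies need fewer units for fine lock accuracy.
    """
    for f_lo, f_hi, n_units in bands:
        if f_lo <= freq_hz <= f_hi:
            return n_units
    raise ValueError("input clock outside the DLL locking range")
```

    The real circuit does this with a frequency detector and configuration logic rather than software, but the band-to-configuration mapping is the same idea.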

    Algorithm for gradient optimization of hybrid precoding based on DNN in the millimeter wave MIMO system
    WANG Yong,WANG Xiyuan,REN Zeyang
    Journal of Xidian University. 2022, 49(1):  202-207.  doi:10.19665/j.issn1001-2400.2022.01.021

    Hybrid precoding for millimeter wave Multi-Input Multi-Output (MIMO) is an important method to reduce hardware complexity and energy consumption. In order to reduce the complexity of optimization processing and improve spectral efficiency, a fast hybrid precoding optimization algorithm based on deep learning is proposed. The difference in signal-to-noise ratio between subchannels may lead to poor bit error rate performance. The hybrid precoder is selected by geometric mean decomposition (GMD) with block diagonalization and training based on a deep neural network (DNN). The optimal selection of the precoder is regarded as a mapping relationship in the DNN so as to optimize the hybrid precoding process of the large-scale MIMO system. The optimization of spectral efficiency is approximately reduced to minimizing the Euclidean distance between the all-digital precoder and the hybrid precoder, and the throughput is improved by using a limited number of RF links. Performance analysis and simulation results show that, owing to the improved gradient algorithm and single-cycle iterative structure, the DNN-based method can minimize the bit error rate (BER) of the millimeter wave MIMO system and improve the spectral efficiency, while significantly reducing the required computational complexity. At a spectral efficiency of 50 bps/Hz, 3 dB of SNR can be saved; when different schemes achieve the same bit error rate, more than 5 dB of SNR can be saved with better robustness.
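
    The distance-minimization objective the abstract mentions has a well-known closed-form inner step: with the analog (phase-shifter) precoder fixed, the digital precoder minimizing the Frobenius distance to the all-digital precoder is a least-squares fit. The sketch below shows only that step; the GMD and DNN parts of the pipeline are not reproduced, and the array/RF-chain sizes in the example are assumptions.

```python
import numpy as np

def digital_precoder(F_opt, F_rf):
    """Least-squares digital precoder for a fixed analog precoder.

    Minimises ||F_opt - F_rf @ F_bb||_F over F_bb, i.e. the Euclidean
    distance between the all-digital precoder F_opt and the hybrid
    product that a limited number of RF chains can realise.
    """
    F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)
    residual = np.linalg.norm(F_opt - F_rf @ F_bb)
    return F_bb, residual
```

    Alternating such closed-form updates with unit-modulus projections of the analog part is a common baseline that learning-based schemes are compared against.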

    Computer Science and Technology & Artificial Intelligence
    Adaptive transmittance dehazing algorithm based on non-linear transformation
    SUN Jingrong,XIE Linchang,DU Mengxin,LUO Liyan
    Journal of Xidian University. 2022, 49(1):  208-215.  doi:10.19665/j.issn1001-2400.2022.01.022

    The rapid development of artificial intelligence has made image processing technology widely used in the new generation of intelligent transportation systems. When existing image dehazing algorithms are applied to intelligent transportation systems, insufficient estimation of the transmission map leads to color shift, artifacts, and low contrast in areas of sudden depth-of-field change in the restored images, and seriously affects the performance of the outdoor acquisition system. Therefore, this paper proposes an adaptive transmittance dehazing algorithm based on a nonlinear transformation. Through logarithmic transformation and adaptive parameters, the intensity values of the high-gray areas in the dark channel are compressed, which yields an estimate of the dark channel of the original haze-free image; the initial transmittance is then estimated. According to the difference between pixel brightness and saturation, an adjustment factor is introduced to compensate the transmittance of the sky area. The compensated transmittance is then smoothed with guided filtering to obtain the adaptively optimized transmittance, and the dehazed result is recovered from the atmospheric scattering model. Simulation results show that the algorithm has a clear and natural dehazing effect on the sky and in areas of sudden depth-of-field change, with rich texture details, no obvious artifacts or color shift, and moderate brightness. Extensive experiments report quantitative results for comparison, such as average gradient, signal-to-noise ratio, structural similarity, and information entropy. These metrics are better than those of other linear algorithms, with each index improved by about 6.4% on average, which effectively alleviates the halo and distortion of the dehazed image in areas of sudden depth-of-field change.
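
    The final recovery step is the standard inversion of the atmospheric scattering model I = J*t + A*(1 - t). The sketch below shows only that inversion; the paper's contributions (log-transform dark-channel compression, sky compensation factor, guided-filter smoothing) happen upstream in estimating t, and the lower clipping bound t0 is a common convention assumed here.

```python
import numpy as np

def recover_scene(I, A, t, t0=0.1):
    """Recover the scene radiance J from the atmospheric scattering model.

    I: hazy image (H, W, 3); A: global atmospheric light (3,);
    t: per-pixel transmittance (H, W).  Clipping t at t0 avoids amplifying
    noise where the estimated transmittance is very small (dense haze, sky).
    """
    t = np.clip(t, t0, 1.0)[..., None]   # broadcast over colour channels
    return (I - A) / t + A               # J = (I - A) / t + A
```

    Everything that distinguishes one dehazing method from another is in how A and especially t are estimated before this step.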

    Algorithm for image inpainting in generative adversarial networks based on gated convolution
    GAO Jie,HUO Zhiyong
    Journal of Xidian University. 2022, 49(1):  216-224.  doi:10.19665/j.issn1001-2400.2022.01.023

    Inpainting algorithms based on generative adversarial networks often produce errors when filling arbitrary masked areas, because the convolution operation treats all input pixels as valid pixels. In order to solve this problem, an image inpainting algorithm based on gated convolution is proposed, which replaces the traditional convolution in the residual blocks of the network with gated convolution to effectively learn the relationship between the known areas and the masked areas. The algorithm uses a two-stage generative adversarial inpainting network with edge repair and texture repair. First, an edge detection algorithm detects the structure of the known area in the damaged image. Next, the edges in the mask area are combined with the color and texture information on the known area to repair the structure. Then the complete structure is combined with the damaged image and sent to the texture repair network for texture repair, and finally the complete image is output. In the process of network training, the Spectral-Normalized Markovian Discriminator is used to alleviate the slow weight change in the iterative process, thereby speeding up convergence and improving the accuracy of the model. Experimental results on the Places2 dataset show that, when repairing images with damaged areas of different shapes and sizes, the proposed algorithm improves the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) by 3.8% and 3% respectively compared with the previous two-stage inpainting algorithm, and the subjective visual effect is significantly improved.
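
    The gated-convolution idea is that a feature branch and a gating branch share the input, and a sigmoid gate provides a learned soft mask. The sketch below reduces the kernels to 1x1 (a per-pixel matrix multiply) for brevity; real gated convolutions use spatial kernels, and the activation choices here are assumptions.

```python
import numpy as np

def gated_conv_1x1(x, w_feat, w_gate):
    """Gated convolution, reduced to 1x1 kernels for brevity.

    x: feature map (H, W, C_in); w_feat, w_gate: (C_in, C_out) weights.
    Unlike plain convolution, which treats every input pixel as valid,
    the sigmoid gate learns a soft per-pixel, per-channel mask that can
    suppress contributions from masked (invalid) regions.
    """
    feat = np.tanh(x @ w_feat)                    # feature branch
    gate = 1.0 / (1.0 + np.exp(-(x @ w_gate)))    # gating branch, in (0, 1)
    return feat * gate                            # gated output
```

    Because the gate is learned rather than propagated as a hard 0/1 mask, the network can treat partially known pixels gradually, which is what lets it fill arbitrary mask shapes.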

    Shape correspondence calculation using the unsupervised Siamese functional maps network
    YANG Jun,WANG Xingxing,LU Youpeng
    Journal of Xidian University. 2022, 49(1):  225-235.  doi:10.19665/j.issn1001-2400.2022.01.024

    Aiming at the problems of incomplete feature descriptor information and unsatisfactory mapping matrix optimization when constructing the shape correspondence between non-rigidly deformed 3D shapes, a novel approach using the Unsupervised Siamese Deep Functional Maps Network (USDFMN) is presented to calculate the shape correspondence. First, the source and target shapes are input to the USDFMN to learn the original 3D geometric traits, which are respectively projected onto the Laplace-Beltrami bases to obtain the corresponding spectral feature descriptors. Second, the spectral feature descriptors are fed into the functional mapping layer to compute a more robust correspondence, from which an optimal functional matrix is obtained. Third, an unsupervised learning model employs the chamfer distance metric to design the unsupervised loss function, which estimates the similarity between shapes and evaluates the final calculated correspondence. Finally, the functional mapping matrices are converted to point-to-point correspondences using the ZoomOut algorithm. Qualitative and quantitative experimental results on the SURREAL and TOSCA datasets show that the proposed algorithm contributes to a more uniform visualization of correspondence distributions and a reduction in geodesic errors. It not only reduces the time complexity but also improves the accuracy of the shape correspondence calculation to a certain extent. Moreover, the generalization ability and scalability of the USDFMN are greatly enhanced on different datasets.
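
    The chamfer distance used in the unsupervised loss is a standard point-set metric and can be written compactly. This is the generic symmetric formulation, a sketch of the metric only, not the paper's full loss.

```python
import numpy as np

def chamfer_distance(X, Y):
    """Symmetric chamfer distance between point sets X (n, 3) and Y (m, 3).

    Average squared distance from each point to its nearest neighbour in
    the other set, summed over both directions; it is zero exactly when
    the two sets coincide, which makes it a usable unsupervised similarity
    signal when no ground-truth correspondence is available.
    """
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # (n, m) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

    Differentiable approximations of this quantity are what allow a network to train without correspondence labels.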

    Boundary-aware network for building extraction from remote sensing images
    ZHANG Yan,WANG Xiangyu,ZHANG Zhongwei,SUN Yemei,LIU Shudong
    Journal of Xidian University. 2022, 49(1):  236-244.  doi:10.19665/j.issn1001-2400.2022.01.025

    The complexity of remote sensing images poses a great challenge for building extraction research. The introduction of deep learning has improved the accuracy of building extraction from remote sensing images, but problems such as blurred boundaries, missing targets, and incomplete extraction areas remain. To address these issues, this paper proposes a boundary-aware network for building extraction from remote sensing images, comprising a feature fusion network, a feature enhancement network, and a feature refinement network. First, the feature fusion network uses an encoder-decoder structure to extract features at different scales, and designs an interactive aggregation module to fuse them. Then, the feature enhancement network strengthens the learning of missed targets through subtraction and cascade operations to obtain more comprehensive features. Finally, the feature refinement network further refines the output of the feature enhancement network using an encoder-decoder structure to obtain rich building boundary features. In addition, to make the network more stable and effective, this paper combines the binary cross-entropy loss and the structural similarity loss, supervising the training of the model at both the pixel and image-structure levels. In tests on the WHU dataset, the IoU and Precision of this network are improved compared with other classical algorithms, reaching 96.0% and 97.9% respectively; at the same time, in terms of subjective vision, the extracted building boundaries are clearer and the regions are more complete.
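
    The combined pixel-level and structure-level supervision can be sketched as a weighted sum of binary cross-entropy and 1 - SSIM. For brevity, the SSIM below uses a single global window rather than the usual local-window average, and the mixing weight alpha is an assumption, not the paper's value.

```python
import numpy as np

def bce_loss(p, y, eps=1e-7):
    # Pixel-level binary cross-entropy between predicted building
    # probabilities p and the ground-truth mask y.
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)).mean()

def ssim_global(p, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Whole-image SSIM (one global window for brevity; the standard
    # formulation averages SSIM over local sliding windows).
    mp, my = p.mean(), y.mean()
    cov = ((p - mp) * (y - my)).mean()
    num = (2 * mp * my + c1) * (2 * cov + c2)
    den = (mp ** 2 + my ** 2 + c1) * (p.var() + y.var() + c2)
    return num / den

def combined_loss(p, y, alpha=0.5):
    # Supervise on both the pixel level (BCE) and the image-structure
    # level (1 - SSIM), as the abstract describes.
    return alpha * bce_loss(p, y) + (1.0 - alpha) * (1.0 - ssim_global(p, y))
```

    The structure term penalizes boundary blur that per-pixel BCE barely notices, which is why combining the two tends to sharpen extracted building outlines.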