The high value and sensitivity of medical data lead to problems of access control, data security, effective supervision, and privacy leakage in electronic medical data sharing. Traditional attribute-based encryption can solve one-to-many access control problems during data sharing, but challenges remain, such as low efficiency, the invalidation of the access policy once it changes even slightly, and the leakage of sensitive information from the access policy. To solve the above problems, first, a scheme combining attribute-based encryption with a hidden access policy and proxy re-encryption is proposed, which not only prevents the access policy from disclosing private information but also realizes more efficient and dynamic data sharing. Second, to address the centralized single point of failure, the lack of supervision in the data sharing process, and the heavy storage load of the blockchain, the scheme is integrated with the blockchain, smart contracts, and the InterPlanetary File System, implementing a low-overhead mode in which the ciphertext of the original data is stored off the chain in a distributed manner and the ciphertext of the key information is shared on the chain. Then an architecture supporting flexible data supervision is established, which is suitable for decentralized medical data sharing scenarios. Finally, a security proof and a performance analysis covering storage, computation, and smart contract costs are conducted for the proposed scheme. The results show that the scheme can resist chosen-plaintext attacks and collusion attacks. In addition, privacy protection and effective supervision are added to the data sharing process, and the efficiency of the proposed scheme is better than that of existing data sharing schemes.
Due to the randomness of noise and the complexity of weighted social networks, traditional privacy protection methods cannot balance data privacy and data utility in social networks. To address these problems, this paper combines histogram statistics with the non-interactive differential privacy query model and proposes a statistical release method for the histogram of weighted edges in social networks. The method regards the statistical histogram of weighted edges as the query result and designs a low-sensitivity Laplace-noise random perturbation algorithm, which realizes differential privacy protection of social relations. To reduce errors, community structure entropy is introduced to divide the user nodes of the social network into several sub-communities, and an improved random perturbation algorithm is proposed. Social relations are partitioned with the community as the unit and Laplace noise is injected, so that the sequence of each community satisfies differential privacy and social relations are protected at the community level. In addition, the characteristics of one-dimensional structural entropy are used to measure the overall degree of privacy protection that the algorithm provides for the weighted social network. Theoretical analysis and experimental results show that the proposed algorithm provides a higher degree of protection against node degree identification than the comparison algorithms, achieving a better privacy protection effect. Meanwhile, it can meet the requirements of differential privacy in large social networks and maintain a high data utility of the weighted social network.
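The core perturbation step can be illustrated with a minimal sketch of the basic Laplace histogram mechanism that such schemes build on (the paper's community-partitioned variant is more elaborate; the bin count, epsilon, and edge weights below are hypothetical):

```python
import numpy as np

def laplace_histogram(weights, bins, epsilon, rng=None):
    """epsilon-DP release of a histogram of edge weights.

    Adding or removing one edge changes exactly one bin count by 1, so the
    L1 sensitivity of the histogram query is 1 and Laplace noise of scale
    1/epsilon suffices for epsilon-differential privacy.
    """
    rng = np.random.default_rng(rng)
    counts, edges = np.histogram(weights, bins=bins)
    noisy = counts + rng.laplace(0.0, 1.0 / epsilon, size=counts.shape)
    # round and clip so that the released histogram stays non-negative
    return np.clip(np.rint(noisy), 0, None).astype(int), edges

# hypothetical edge weights of a small social graph
noisy_counts, edges = laplace_histogram(
    [1.0, 2.5, 2.7, 3.1, 4.8, 0.5, 2.2], bins=5, epsilon=1.0, rng=0)
```

A smaller epsilon injects heavier noise, trading utility for privacy, which is exactly the balance the community-level partitioning aims to improve.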
In recent years, the spread of cloud computing has driven the great development of searchable encryption technology. However, most existing searchable encryption schemes support only single-keyword search with traversal-based ciphertext retrieval. The search process cannot be filtered according to user needs and does not support attribute revocation, which returns a large amount of irrelevant data and degrades the search experience. To solve the above problems, a top-k multi-keyword ciphertext retrieval scheme supporting attribute revocation is proposed. On the basis of supporting multi-keyword retrieval, an index table mapping attributes to file sets is constructed through the access policy and semantic model of the attribute-based algorithm, which realizes fine-grained access control and authority management of the ciphertext and supports top-k sorting and efficient retrieval. Homomorphically encrypted fuzzy data parameters are introduced to ensure the privacy of encrypted data and multi-user attribute authorization. Multi-layer data compression reduces the storage overhead, and attribute revocation can be achieved with a low computational overhead. Theoretical analysis shows that the scheme provides forward and backward security and keyword hiding and can resist collusion attacks. Functional comparison and experimental evaluation against similar schemes are carried out, and the results show that the scheme achieves a better overall performance in terms of functionality and efficiency.
Aiming at the problem of privacy leakage caused by the lack of correctness verification of search results and data updates, this paper proposes a dynamic and verifiable ciphertext retrieval scheme. First, the AMAC is generated according to the index; the index and the AMAC are encrypted and uploaded to the blockchain, and the search results are returned to the user through a smart contract, which solves the problem of incorrect results returned by a malicious server. Second, a version pointer is introduced to point to the update state, so that the trapdoor generated by a keyword differs in each update state, ensuring that no information is leaked when the data is updated. Moreover, this paper exploits Ethereum's own characteristics by matching the externally owned account (EOA) in Ethereum with the public key, encrypting the authorization information, and sending it in a transaction, thereby realizing the data owner's authorized access control over users. Finally, the security analysis shows that the scheme not only satisfies adaptive security but also achieves forward and backward security, and can well protect the security of encrypted data. Experimental results show that the scheme reduces index generation and verification time and is highly efficient in search.
In existing cloud storage data integrity auditing schemes, only a few tags participate in integrity verification while most data tags remain idle, which wastes computing and storage resources. To solve this problem, this paper constructs an efficient cloud storage data auditing scheme without bilinear pairings. The scheme uses the Schnorr signature algorithm to generate tags only for the audited data blocks, which reduces the user's computing overhead, and it can efficiently perform dynamic updates of the data. In the challenge phase, blockchain technology is used to generate challenge parameters from the timestamp to ensure their randomness, so the cloud service provider and the third-party auditor do not need to interact, which reduces the communication overhead. Throughout the auditing phase, the scheme avoids expensive operations such as bilinear pairings, modular exponentiations, and map-to-point hash functions. The security analysis shows that the scheme is secure and effective: it can resist forgery attacks and replay attacks from cloud service providers and protect the privacy of the data and the private key. In the efficiency analysis, numerical and experimental results show that the scheme achieves higher auditing efficiency and dynamic update efficiency than existing cloud auditing schemes; moreover, as the numbers of data blocks and challenge blocks increase, its advantages become more obvious.
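The tag generation step can be sketched with a textbook Schnorr signature over a toy group. The group parameters, block label, and helper names below are illustrative, not the paper's; a real scheme would use cryptographically large parameters and combine such tags into audit proofs:

```python
import hashlib
import secrets

# Toy Schnorr group for illustration only; real schemes use large parameters.
p, q = 607, 101                  # q is prime and divides p - 1 = 606
g = pow(2, (p - 1) // q, p)      # generator of the order-q subgroup

def H(r, m):
    """Hash challenge, reduced into Z_q."""
    return int.from_bytes(hashlib.sha256(f"{r}|{m}".encode()).digest(), "big") % q

def tag(x, block):
    """Schnorr signature used as an integrity tag for one audited block."""
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    e = H(r, block)
    s = (k + x * e) % q
    return r, s

def verify(y, block, sig):
    """Check g^s == r * y^e (mod p)."""
    r, s = sig
    return pow(g, s, p) == (r * pow(y, H(r, block), p)) % p

x = secrets.randbelow(q - 1) + 1     # user's private key
y = pow(g, x, p)                     # public key
sig = tag(x, "block-0017")
```

Verification needs only two modular exponentiations and no pairing, which is why pairing-free Schnorr tags reduce the auditing cost.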
In most searchable encryption schemes, the cloud server compares the trapdoor with all secure indexes in the database during a search, which causes excessive overhead. To address this problem, an efficient multi-keyword searchable encryption scheme supporting flexible access control is proposed. Before the sensitive data is encrypted and uploaded to the cloud server, it is clustered using k-means into several clusters, each of which is given a different index through Latent Dirichlet Allocation. In the search phase, the cloud server finds the most relevant cluster by the Jaccard distance between the keyword set in the trapdoor and each cluster index, and searches only the matched clusters, which reduces the number of comparisons between the trapdoor and the index. The cloud server then obtains the file list using a B+ tree-based data structure to improve the search efficiency. In addition, the scheme achieves encrypted file sharing by incorporating a broadcast encryption mechanism, which allows users to search for keywords within the authorized file subset and takes the keyword set of each cluster as the user's access rights. The performance comparison and experimental analysis show that the user's private key has a constant size, that the communication and storage costs are independent of the number of authorized users, and that the search precision reaches about 90%.
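The cluster selection step reduces to a nearest-set search under Jaccard distance. A minimal sketch, with hypothetical cluster indexes and trapdoor keywords (the real scheme operates on encrypted indexes):

```python
def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B|; defined as 0 for two empty sets."""
    a, b = set(a), set(b)
    return (1.0 - len(a & b) / len(a | b)) if (a | b) else 0.0

# hypothetical per-cluster topic indexes produced by LDA
clusters = {
    "c0": {"medical", "record", "privacy"},
    "c1": {"cloud", "storage", "audit"},
    "c2": {"search", "keyword", "index"},
}
trapdoor = {"keyword", "search", "rank"}          # keyword set in the trapdoor
best = min(clusters, key=lambda c: jaccard_distance(trapdoor, clusters[c]))
```

Only the winning cluster is then searched, so the trapdoor is compared against one cluster's indexes instead of the whole database.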
With the rapid development of communication network technology, the rapid improvement of intelligent electronic devices, and the rapid advancement of positioning technology, location-based services make people's daily lives more convenient. However, users' location privacy faces threats that cannot be ignored. Existing location privacy protection methods for continuous queries ignore the semantic information contained in users' movement trajectories, allowing attackers to use that information to mine users' behavior habits, personal preferences, and other private information. At the same time, traditional fake-trajectory privacy protection methods generate multiple fake trajectories to confuse the user's real trajectory, but the transitions between semantic location points in the fake trajectories do not conform to the user's behavior rules. We therefore propose a spatiotemporal correlation location privacy protection method with semantic information. The method combines the user's historical semantic trajectories with the semantic information of locations to construct a user behavior model, and constructs fake trajectories that conform to the user's behavior rules according to the transition probabilities and spatiotemporal correlations between semantic locations at adjacent moments in the model, thus confusing the user's real trajectory. Finally, the algorithm is compared with existing algorithms on a real data set, which shows that it can effectively reduce the risk of location privacy leakage in continuous query scenarios when the attacker has the relevant background knowledge.
Cryptographic accumulators can accumulate all the elements in a set and efficiently give a (non)membership proof for any element, that is, prove whether an element exists in the set. Cryptographic accumulators are mainly divided into three types: static accumulators, dynamic accumulators, and universal accumulators. Specifically, static accumulators accumulate the elements of a static set; dynamic accumulators further allow the dynamic addition and deletion of elements in the accumulated set; universal accumulators support both membership proofs and non-membership proofs (for elements not in the set). For these different types, many scholars have given concrete constructions based on different cryptographic tools, which can be divided into RSA-based, bilinear-pairing-based, and Merkle-hash-tree-based cryptographic accumulators. Cryptographic accumulators have a wide range of application scenarios, such as group signatures, ring signatures, anonymous credentials, timestamping, and outsourced data verification. In recent years, cryptographic accumulators have also been applied to the blockchain to solve the problem of high storage overhead. This paper first classifies, analyzes, and summarizes the existing schemes in terms of their constructions and functions, then introduces the main application scenarios of cryptographic accumulators, and finally points out some problems faced by the existing schemes as well as future development trends and research directions.
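The RSA-based construction mentioned above can be sketched in a few lines. The modulus, base, and element-to-prime mapping below are toy values for illustration; a real accumulator uses a large RSA modulus of unknown factorization and a prime-hashing function:

```python
from math import prod

# Toy parameters for illustration: real accumulators use a 2048-bit RSA
# modulus whose factorization nobody knows.
N = 3233                         # 61 * 53
g = 2                            # public base
primes = [3, 5, 7, 11]           # set elements, mapped to distinct primes

acc = pow(g, prod(primes), N)    # accumulate the whole set

def witness(x):
    """Membership witness for x: accumulate every element except x."""
    return pow(g, prod(p for p in primes if p != x), N)

def verify(x, w):
    """x is in the set iff w^x recreates the accumulator."""
    return pow(w, x, N) == acc

w7 = witness(7)
```

The accumulator value has constant size regardless of the set size, which is what makes this primitive attractive for reducing blockchain storage overhead.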
With the rapid development of e-commerce, the importance of information security is increasing. Cryptography can ensure the safety, confidentiality, integrity, and tamper resistance of data in the process of communication, and digital signature algorithms such as the Elliptic Curve Digital Signature Algorithm (ECDSA) provide key technologies for secure e-commerce. However, the design architecture of the ECDSA executes multi-scalar multiplication and single-scalar multiplication with separate algorithms, which generally increases the computational complexity. To reduce this complexity, a low-complexity algorithm for multi-scalar multiplication based on the joint double-base chain (JDBC) is studied. The algorithm uses a modulus-taking method to construct the JDBC, applying the modulus operation to the part that is indivisible by the base while pre-computing the obtained remainders. Compared with the greedy method used in the existing JDBC construction, the length of the produced base chain is reduced, and the computational complexity of multi-scalar multiplication is significantly reduced. Experimental results indicate that the proposed algorithm reduces the complexity by 9.84%~30.75% for multi-scalar multiplication and by 3.88%~26.81% for single-scalar multiplication on the NIST P-256 curve. In addition, the algorithm reduces the complexity of joint processing by 16.65%, and its point-operation count is estimated to be reduced by 25.00% compared with the wNAF and JDBC methods. A model built in Python shows that the running speed increases by at least 14.80% compared with current algorithms.
Complete permutation polynomials (CPPs) over finite fields have important applications in cryptography, coding theory, and combinatorial design theory. The block cipher algorithm SMS4, published in China in 2006, is designed based on CPPs. Recently, CPPs have been used in the construction of cryptographic functions. Thus, the construction of CPPs over finite fields has become a hot research topic in cryptography. CPPs with few terms, especially monomial CPPs over finite fields, attract attention due to their simple algebraic form and easy implementation. In this paper, a detailed survey of the constructions of monomial CPPs is presented. We then give a class of monomial CPPs over finite fields of odd characteristic by using a powerful criterion for permutation polynomials. Our construction enriches the known results on monomial CPPs. In addition, we also calculate the inverses of these bijective monomials.
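The defining property is easy to check by brute force over a small prime field: a monomial a*x^d is a CPP iff both a*x^d and a*x^d + x permute the field. A minimal sketch (prime fields only; the paper's constructions concern general finite fields of odd characteristic):

```python
def is_perm(f, q):
    """f is a permutation of F_q (q prime here) iff its image has q values."""
    return len({f(x) % q for x in range(q)}) == q

def is_cpp(a, d, q):
    """a*x^d is a complete permutation polynomial over F_q iff both
    x -> a*x^d and x -> a*x^d + x are permutations of F_q."""
    return (is_perm(lambda x: a * pow(x, d, q), q)
            and is_perm(lambda x: a * pow(x, d, q) + x, q))

q = 7
cpps = [(a, d) for a in range(1, q) for d in range(1, q - 1) if is_cpp(a, d, q)]
```

Over F_7 only the linear monomials with a not equal to -1 survive, which illustrates why nontrivial monomial CPPs are scarce and worth constructing explicitly.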
In the white-box attack context, the attacker has full access to the cryptographic system. To ensure key security in this context, the concept of white-box cryptography was proposed. In 2016, Bos et al. proposed differential computation analysis (DCA), introducing the idea of side-channel analysis into white-box cryptography for the first time and creating a new path for white-box cryptanalysis. DCA takes the software execution traces produced while the white-box cryptographic program runs as the analytical object and uses statistical analysis to extract the key; knowledge of the design details of the white-box implementation has little impact on the analysis. White-box SM4 is the cryptographic implementation of the commercial cryptographic standard algorithm SM4 under the white-box security model. To evaluate the security of white-box SM4 efficiently, a side-channel analysis method for white-box SM4 implementations, called intermediate-value mean difference analysis (IVMDA), is proposed based on research on the DCA. IVMDA directly analyzes the intermediate values produced during encryption and uses linear combinations to counteract the obfuscation of white-box SM4. With at least 60 random plaintexts, the first-round key can be completely extracted in about 8 minutes. Compared with existing analysis methods, this method is convenient to deploy, suitable for practical application environments, and highly efficient.
Since 2016, small-state stream ciphers such as Fruit v2, Fruit-80, Fruit-128, and Fruit-F have been proposed based on the lightweight stream cipher Sprout. The difference between Fruit and Sprout is that the round key participating in the internal state update of Fruit does not involve the internal states of the NFSR and LFSR, which makes recovering the key of Fruit more difficult than that of Sprout. In this paper, based on Maitra's differential fault attack on Sprout and Banik's differential fault attack on Grain, we describe a differential fault attack (DFA) on Fruit v2 and Fruit-80 under the most relaxed assumptions. We assume that the attacker can inject multiple, time-synchronized, single bit-flipping faults into the same albeit random register location. We first accurately identify the location of the fault injection, and then, exploiting the affine property of the output function, we formulate a sufficient number of linear equations to recover the whole internal state of the cipher. The results show that the time complexity required to determine the internal state of Fruit v2 and Fruit-80 is 2^16.3 (LFSR) and 2^6.3 (NFSR). For key recovery, with the help of the CryptoMiniSat 2.9.5 SAT solver, all the equations can be solved in about 10 minutes. According to the statistics, the number of faults needed for the attack is 2^7.3, and the complexity of identifying the correct fault location is 2^6.3 (Fruit v2) and 2^7.3 (Fruit-80), respectively.
Variable-rate low-density parity-check (LDPC) codes are a class of codes with various code rates and play an important role in most communication systems. Two representatives of such codes are multi-rate LDPC (MR-LDPC) codes with a constant codeword length and rate-compatible LDPC codes with a constant information length. Combining algebraic and superposition construction methods, this paper studies the design and construction of Raptor-like multi-rate quasi-cyclic LDPC codes by progressively adjusting the lifting sizes for different code rates. With the proposed method, as the code rate decreases, the sizes of the base matrix and exponent matrix of the constructed codes grow while the lifting sizes are reduced. Besides, to achieve a constant codeword length with various information lengths, both the shortening of information bits and the puncturing of parity bits are considered. The resulting codes simultaneously possess quasi-cyclic and Raptor-like structures, so the corresponding encoder/decoder can be easily implemented in hardware and encoding can be done directly from the parity-check matrix. Moreover, the exponent matrices of the constructed codes have a specific algebraic structure, and thus the storage complexity is very low. Numerical results show that, compared with some standard codes, e.g., WiMAX LDPC codes, the constructed codes obtain a better overall performance, providing a promising scheme for the fused design of coding methods for the future ground network and other communication systems.
Crowdsensing relies on the mobility of a large number of users and the sensing ability of intelligent devices to complete data collection, and it has become an effective way of collecting sensing data. In existing crowdsensing networks, the cloud platform is responsible for the whole process of task distribution and data collection, so it is difficult to effectively process a large amount of real-time data and the sensing cost is high. Different participants have different interests in tasks, and ignoring preference factors leads to low efficiency and poor satisfaction of the selected participants. Aiming at these problems, a preference-aware participant selection strategy under an edge-cloud collaborative architecture is proposed, in which the participant selection process is performed by the cloud platform and edge nodes in collaboration. The cloud platform distributes tasks to edge nodes based on task locations and collects data from the edge nodes. Each edge node is responsible for the participant selection process: it quantifies a user's preference for a task by evaluating the time matching degree, distance matching degree, task type, and reward, and quantifies a task's preference for a user by evaluating the user's reputation and sensing cost. Based on bilateral preferences and stable matching theory, the participant selection problem is modeled as a many-to-one stable matching problem between users and tasks, and the stable matching is solved to maximize participant preference. The results show that the participants selected by this strategy have high satisfaction and that the data quality collected by the platform is good.
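Many-to-one stable matching of this kind is classically solved by deferred acceptance (Gale-Shapley). A minimal sketch with hypothetical users, tasks, preference lists, and capacities (the paper's strategy derives these lists from the bilateral preference scores):

```python
def stable_match(user_pref, task_pref, capacity):
    """Deferred acceptance: users propose to tasks in preference order;
    each task keeps its best applicants up to its capacity."""
    rank = {t: {u: i for i, u in enumerate(p)} for t, p in task_pref.items()}
    free = list(user_pref)                  # users still proposing
    nxt = {u: 0 for u in user_pref}         # next task index to propose to
    assigned = {t: [] for t in task_pref}
    while free:
        u = free.pop()
        if nxt[u] >= len(user_pref[u]):
            continue                        # u has exhausted its list
        t = user_pref[u][nxt[u]]
        nxt[u] += 1
        assigned[t].append(u)
        assigned[t].sort(key=lambda v: rank[t][v])
        if len(assigned[t]) > capacity[t]:
            free.append(assigned[t].pop())  # reject the worst applicant
    return assigned

user_pref = {"u1": ["t1", "t2"], "u2": ["t1", "t2"], "u3": ["t1", "t2"]}
task_pref = {"t1": ["u2", "u1", "u3"], "t2": ["u3", "u1", "u2"]}
capacity = {"t1": 2, "t2": 1}
assigned = stable_match(user_pref, task_pref, capacity)
```

The resulting matching is stable: no user-task pair would both prefer each other over their assigned partners, which is the satisfaction property the strategy targets.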
Mobile edge computing reduces the task computing delay and energy consumption of mobile terminals by sinking computing and storage resources to the edge of the mobile network, so as to effectively meet the high backhaul bandwidth and low delay requirements arising from the rapid development of the mobile Internet and the Internet of Things. As a major advantage of mobile edge computing, computation offloading improves the mobile service capability by migrating heavy computing tasks to edge servers. In this paper, a task offloading solution that minimizes the system response delay and the mobile terminal's energy consumption is proposed for mobile terminal applications with low latency and low energy consumption in mobile edge computing scenarios. First, comprehensively considering the delay and energy consumption of task execution on mobile terminals, the task slicing model, delay model, energy consumption model, and target optimization model are constructed. Second, an improved immune optimization algorithm and a task offloading solution based on immune optimization are proposed. Finally, the proposed solution is compared with local execution and with offloading solutions based on the genetic algorithm. Simulation results show that the proposed scheme outperforms the comparison schemes in the comprehensive cost of delay and energy consumption, and can meet the offloading requirements of mobile terminal applications with low delay and low energy consumption.
To solve the problem of increased service request blocking rate and bandwidth blocking rate caused by spectrum fragmentation in elastic optical networks, the causes of spectrum fragmentation are analyzed in detail. According to the characteristics of the services carried by the optical network, an elastic optical network defragmentation algorithm based on multi-criteria decision-making is proposed from the perspective of improving spectrum utilization. The algorithm uses the multi-criteria decision-making method to handle the selection problems encountered during defragmentation and makes decisions by comprehensively considering various evaluation indexes. In the traffic routing stage, the algorithm proceeds in five phases; in each phase, the best decision is made according to the current state of the optical network to sort out the spectrum fragments. Each phase marks different types of connections with different labels and judges them according to the weights set by the multi-criteria decision-making method, and finally the best scheme is adopted to achieve the best defragmentation effect. Simulation on specific examples shows that the proposed algorithm has a lower bandwidth blocking rate (36% under high load) and a higher spectrum utilization (up to 65% under high load), which can effectively reduce the network request blocking rate under high network load and provides a theoretical reference for handling spectrum fragmentation in elastic optical networks under practical conditions.
The evaporation duct is a kind of electromagnetic wave transmission medium that occurs randomly in the atmosphere near sea level and is an important part of the complex electromagnetic environment of the naval battlefield. At present, the characteristic parameters of the evaporation duct are usually obtained by substituting a variety of marine monitoring data (MMD) into a specific calculation model; these meteorological elements usually include atmospheric temperature, humidity, wind speed, sea surface temperature, etc. To obtain the distribution of the evaporation duct over a large area for a long period of time, it is necessary to observe MMD continuously over a long time. Aiming at the poor performance of traditional compressed sensing methods in reconstructing time-varying MMD, an online MMD reconstruction method based on sequential compressed sensing with low-rank regularization is proposed. The algorithm first analyzes real MMD and reveals the low-rank property of the data in the spatial structure; it then constructs a low-rank regularization term combined with the existing historical data, with the data fidelity term established from the condition that the data in the overlapping area of successive observations are equal. Finally, the reconstruction optimization problem is solved with the alternating direction method of multipliers. Theoretically, the effectiveness of the algorithm is proved by convergence and complexity analyses, and experimental results verify that the algorithm achieves a higher reconstruction accuracy.
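The low-rank regularization subproblem inside ADMM-style solvers is typically handled by singular value thresholding, the proximal operator of the nuclear norm. A minimal sketch with synthetic rank-3 "field" data standing in for MMD (the matrix sizes, noise level, and threshold are hypothetical):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox operator of tau * nuclear norm,
    the low-rank update used inside ADMM-style reconstruction solvers."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))  # rank-3 "field"
noisy = L + 0.01 * rng.standard_normal(L.shape)                  # observed data
denoised = svt(noisy, tau=0.5)   # noise singular values fall below tau
```

Because the small singular values contributed by noise fall below the threshold, the output is again (numerically) low rank, which is exactly how the regularizer exploits the spatial structure of the data.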
The steering vector of an array is biased when there is an error in the signal receiving array, which affects the performance of parameter estimation algorithms. To reduce the influence of array errors on the parameter estimation results and to reduce the computational complexity, a combination of intelligent algorithms and principal component analysis is used. First, to avoid the tedious process of error modeling, the back-propagation neural network is used to absorb errors and other factors into the network model. Second, training a near-field source parameter estimation model with the back-propagation neural network takes too long and is quite complicated; to shorten the training time and reduce the amount of calculation, the principal component analysis method is introduced into the back-propagation neural network model to reduce the dimension of the signal feature matrix. The reduced-dimension signal feature matrix is then used as the input feature of the back-propagation neural network, and the near-field source parameters are used as the expected output for training, which simplifies the network structure and shortens the training time. Finally, the received data containing the signal information to be estimated is input into the trained network model to obtain the estimated signal incident direction. The algorithm can accurately estimate the parameters of near-field sources in the presence of receiving array errors and improves the estimation performance of near-field source signal parameters at a low signal-to-noise ratio. Simulation results show the effectiveness of the algorithm.
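The dimension reduction step can be sketched as standard PCA via the SVD of the centered feature matrix (the snapshot and feature counts below are hypothetical placeholders for the paper's signal feature matrix):

```python
import numpy as np

def pca_reduce(X, k):
    """Center the feature matrix and project it onto its top-k principal
    components obtained from the SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# hypothetical received-signal feature matrix: 200 snapshots, 40 features
features = np.random.default_rng(1).standard_normal((200, 40))
reduced = pca_reduce(features, k=8)
```

Feeding the 8-dimensional projections to the network instead of the raw 40-dimensional features shrinks the input layer, which is the source of the reported training speed-up.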
The explicit and unconditionally stable finite-difference time-domain (US-FDTD) method has a high computational cost for solving the eigenvalues of the global matrix and for field iteration when there are a large number of unknown fields or unstable modes. To solve this problem, an efficient implementation of US-FDTD based on a local eigenvalue solution (USL-FDTD) is given. With this scheme, all unstable modes in the entire system can be obtained accurately and efficiently without solving the eigenvalue problem of the global matrix. In the implementation, the computational domain is first divided into two parts: region I contains all fine grids and the adjacent coarse grids, and region II consists of the remaining coarse grids. The original global system matrix can then be divided naturally into four local matrix blocks. These four small matrices contain the grid information of region I and region II respectively, together with the coupling relationship between the two regions. Since the unstable modes exist only in the fine grids and the adjacent coarse grids, all unstable modes can be obtained by solving the eigenvalue problem of the local matrix corresponding to region I. Finally, the fields in region I and region II can be calculated separately; the fields of the two regions are associated by the two coupling matrix blocks, and there is no unstable mode in the coupling matrix blocks. USL-FDTD not only decreases the dimension of the matrix to be solved but also reduces the computational complexity and improves the computational efficiency. Numerical results show the accuracy and efficiency of this implementation.
This paper presents a four-phase output delay-locked loop (DLL) with an adaptive-bandwidth delay chain structure, which is suitable for clock generation in NAND Flash high-speed interface circuits meeting the ONFI 4.2 international protocol standard. To solve the problem of the limited delay time of the traditional delay chain over a wide frequency range, a configurable delay chain circuit structure is proposed, which selects appropriate delay units in different frequency bands so that the operating frequency range of the DLL is extended while the lock accuracy is maintained. In addition, an adaptive control circuit based on a frequency detector is proposed, which can track the input clock frequency, automatically configure the delay chain, and realize the adaptive bandwidth of the DLL. The DLL circuit is designed in the SMIC 28 nm CMOS process. Simulation results show that the locking range of the DLL is [22 MHz, 1.6 GHz], with a maximum locking accuracy of 17 ps at a 25 °C/0.9 V power supply and the typical process corner.
Hybrid precoding for millimeter-wave multiple-input multiple-output (MIMO) systems is an important method to reduce hardware complexity and energy consumption. To reduce the complexity of the optimization process and improve the spectral efficiency, a fast hybrid precoding optimization algorithm based on deep learning is proposed. Since the difference in signal-to-noise ratio between subchannels may lead to a poor bit error rate performance, the hybrid precoder is selected through the geometric mean decomposition (GMD) of block diagonalization and training based on a deep neural network (DNN). The optimal selection of the precoder is regarded as a mapping relationship in the DNN to optimize the hybrid precoding process of large-scale MIMO. The optimization of spectral efficiency is approximately reduced to minimizing the Euclidean distance between the all-digital precoder and the hybrid precoder, and the throughput is improved using a limited number of RF chains. Performance analysis and simulation results show that, owing to the improved gradient algorithm and single-cycle iterative structure, the DNN-based method can minimize the bit error rate (BER) of millimeter-wave MIMO and improve the spectral efficiency while significantly reducing the required computational complexity. At a spectral efficiency of 50 bps/Hz, 3 dB of SNR can be saved; when different schemes achieve the same bit error rate, more than 5 dB of SNR can be saved with better robustness.
The rapid development of artificial intelligence has made image processing technology widely used in the new generation of intelligent transportation systems. When existing image dehazing algorithms are applied to intelligent transportation systems, insufficient estimation of the transmission map leads to color shift, artifacts, and low contrast in areas where the scene depth changes abruptly, which seriously affects the performance of outdoor acquisition systems. Therefore, this paper proposes an adaptive-transmittance defogging algorithm based on a nonlinear transformation. Through a logarithmic transformation with adaptive parameters, the intensity values of the high-gray area of the dark channel are compressed to obtain the dark channel of the original haze-free image, and the initial transmittance is then estimated. According to the difference between pixel brightness and saturation, an adjustment factor is introduced to compensate the transmittance of the sky area. After that, combined with guided filtering, the compensated transmittance is smoothed to obtain the adaptively optimized transmittance. The dehazed result is then obtained on the basis of the atmospheric scattering model. Simulation results show that the algorithm produces a clear and natural dehazing effect in the sky and in areas of abrupt depth change, with rich texture details, no obvious artifacts or color shift, and moderate brightness. Extensive experiments report quantitative comparisons in terms of average gradient, signal-to-noise ratio, structural similarity, and information entropy; the metrics are better than those of other linear algorithms, with each index improved by about 6.4% on average, which effectively alleviates the halo and distortion of dehazed images in areas of abrupt depth change.
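The baseline that the algorithm refines is the classical dark-channel-prior transmission estimate. A minimal sketch assuming SciPy is available (the paper's method additionally applies the logarithmic compression, sky adjustment factor, and guided filtering; the toy image and atmospheric light below are hypothetical):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over the RGB channels, then a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def transmission(img, atmosphere, omega=0.95, patch=15):
    """Classical initial estimate t = 1 - omega * dark(I / A)."""
    return 1.0 - omega * dark_channel(img / atmosphere, patch)

hazy = np.full((32, 32, 3), 0.8)          # uniformly hazy toy image
A = np.array([1.0, 1.0, 1.0])             # atmospheric light
t = transmission(hazy, A)                 # 1 - 0.95 * 0.8 everywhere
```

In bright sky regions the dark channel is large, so this baseline underestimates the transmission there; the adjustment factor in the paper compensates exactly for that case.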
The inpainting algorithm based on generative adversarial networks often produces errors when filling arbitrary masked areas, because the convolution operation treats all input pixels as valid pixels. In order to solve this problem, an image inpainting algorithm based on gated convolution is proposed, which replaces the traditional convolution in the residual blocks of the network with gated convolution to effectively learn the relationship between the known areas and the masked areas. The algorithm uses a two-stage generative adversarial inpainting network with edge repair and texture repair. First, we use an edge detection algorithm to detect the structure of the known area in the damaged image. Next, we combine the edges in the masked area with the color and texture information of the known area to repair the structure. Then, the completed structure is combined with the damaged image and sent to the texture repair network for texture repair. Finally, the complete image is output. In the process of network training, the Spectral-Normalized Markovian Discriminator is used to alleviate the problem of slow weight change in the iterative process, thereby speeding up convergence and improving the accuracy of the model. Experimental results on the Places2 dataset show that, when repairing images with damaged areas of different shapes and sizes, the proposed algorithm outperforms the previous two-stage inpainting algorithm in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM): the PSNR and SSIM are improved by 3.8% and 3% respectively, and the subjective visual effect is significantly improved.
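The core idea of gated convolution, as opposed to ordinary convolution, can be sketched in a few lines: a second convolution branch produces a sigmoid gate that softly down-weights responses from masked (invalid) pixels. The tiny NumPy implementation below is a single-channel illustration with random kernels, not the paper's trained network.

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D cross-correlation of a single-channel map with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def gated_conv(x, k_feat, k_gate):
    """Gated convolution: a learned soft mask modulates the feature response,
    so unknown pixels can be suppressed instead of being treated as valid."""
    feat = np.tanh(conv2d(x, k_feat))                  # feature branch
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, k_gate)))    # sigmoid gating branch
    return feat * gate

rng = np.random.default_rng(2)
x = rng.standard_normal((6, 6))                        # toy feature map
y = gated_conv(x, rng.standard_normal((3, 3)), rng.standard_normal((3, 3)))
```

In the actual network both kernels are learned end-to-end, so the gate adapts to arbitrary mask shapes rather than being hand-specified.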
Aiming at the problems of incomplete feature descriptor information and unsatisfactory mapping matrix optimization when constructing the shape correspondence between non-rigidly deformed 3D shapes, a novel approach is presented that uses the Unsupervised Siamese Deep Functional Maps Network (USDFMN) to calculate the shape correspondence. First, the source and target shapes are input to the USDFMN to learn the original 3D geometric features, which are respectively projected onto the Laplace-Beltrami bases to obtain the corresponding spectral feature descriptors. Second, the spectral feature descriptors are input to the functional mapping layer to calculate a more robust correspondence, from which an optimal functional matrix is obtained. Third, an unsupervised learning model is employed to calculate the chamfer distance metric for designing the unsupervised loss function, which estimates the similarity between shapes and evaluates the final calculated correspondence. Finally, the functional mapping matrices are converted to point-to-point correspondences using the ZoomOut algorithm. Qualitative and quantitative experimental results on the SURREAL and TOSCA datasets show that the proposed algorithm contributes to a more uniform visualization of correspondence distributions and a reduction in geodesic errors. It can not only reduce the time complexity but also improve the accuracy of the shape correspondence calculation to a certain extent. Moreover, the generalization ability and scalability of the USDFMN are greatly enhanced on different datasets.
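The chamfer distance used in the unsupervised loss has a compact definition: the mean nearest-neighbour squared distance between two point sets, taken in both directions. A minimal NumPy version on toy point clouds (not the paper's mesh data) looks like this:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric chamfer distance between point sets p (n, 3) and q (m, 3):
    mean nearest-neighbour squared distance in both directions."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)   # (n, m) pairwise sq. dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(3)
shape_a = rng.standard_normal((100, 3))                   # toy "source" point cloud
shape_b = shape_a + 0.01 * rng.standard_normal((100, 3))  # slightly deformed copy
```

Because it needs no ground-truth correspondence, this metric can supervise the functional-map training entirely from shape geometry, which is what makes the pipeline unsupervised.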
The complexity of remote sensing images brings a great challenge to building extraction research. The introduction of deep learning improves the accuracy of building extraction from remote sensing images, but there are still problems such as blurred boundaries, missing targets and incomplete extraction areas. To address these issues, this paper proposes a boundary-aware network for building extraction from remote sensing images, consisting of a feature fusion network, a feature enhancement network and a feature refinement network. First, the feature fusion network uses an encoder-decoder structure to extract features at different scales, and an interactive aggregation module is designed to fuse them. Then, the feature enhancement network enhances the learning of missed targets through subtraction and cascade operations to obtain more comprehensive features. Finally, the feature refinement network further refines the output of the feature enhancement network by using an encoder-decoder structure to obtain rich building boundary features. In addition, in order to make the network more stable and effective, this paper combines the binary cross-entropy loss and the structural similarity loss, supervising the training of the model at both the pixel and image-structure levels. Tests on the WHU dataset show that, in terms of objective metrics, the IoU and Precision of this network are improved compared with other classical algorithms, reaching 96.0% and 97.9% respectively. At the same time, in terms of subjective vision, the extracted building boundaries are clearer and the regions are more complete.
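The combined pixel-level and structure-level supervision can be sketched as a single loss function. The snippet below pairs binary cross-entropy with a global (single-window) SSIM term; real SSIM losses use local Gaussian windows and the weighting between the two terms is a design choice, so this is an illustrative simplification rather than the paper's exact loss.

```python
import numpy as np

def bce_ssim_loss(pred, target, eps=1e-7):
    """BCE (pixel level) plus 1 - SSIM (structure level), on maps in [0, 1].
    Single-window SSIM is a simplification of the usual locally windowed form."""
    p = np.clip(pred, eps, 1 - eps)
    bce = -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    c1, c2 = 0.01 ** 2, 0.03 ** 2            # standard SSIM stabilizers
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) \
         / ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return bce + (1.0 - ssim)

rng = np.random.default_rng(9)
target = (rng.uniform(size=(32, 32)) > 0.5).astype(float)   # toy building mask
pred = rng.uniform(size=(32, 32))                            # toy network output
```

The BCE term pushes each pixel toward its label, while the SSIM term penalizes structural mismatch such as blurred or broken boundaries, which is why combining them stabilizes boundary learning.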
Aiming at the contradiction between the enhanced detection performance achieved by increasing the correlation length of the traditional time-domain direct correlation detection algorithm and the feasibility of its hardware implementation, a joint segment correlation and selective RAKE (JSC-SRAKE) preamble detection algorithm is proposed. Specifically, this algorithm reduces the complexity of hardware implementation by utilizing the concept of segment correlation, and employs RAKE receiving technology to collect the multipath signals so as to increase the detected signal energy. MATLAB simulation results show that both the missed detection probability and the false alarm probability of our proposal are less than 10^(-4) when the SNR equals -17 dB. At the same time, the proposed algorithm increases the selectable range of the optimal decision threshold after introducing the selective RAKE receiving algorithm, thereby reducing the impact of an improperly selected decision threshold on the detection performance. Compared with direct correlation detection and segment correlation detection, this algorithm achieves an improved gain in the low-SNR region. Moreover, the resources consumed by the proposed algorithm are fewer than those of existing algorithms. Although the algorithm adds some complexity, the gain it brings is substantial, achieving an optimal compromise between performance and cost while satisfying the detection performance requirement. Finally, the algorithm is implemented on an FPGA in a Single Carrier Interleaved Frequency Division Multiple Access (SC-IFDMA) system, with results consistent with those of MATLAB, indicating that the algorithm can be applied to multi-user communication systems at similarly low SNRs.
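The difference between direct correlation and segment correlation can be shown in a few lines: the full-length coherent correlation is split into short segments that are correlated coherently and then combined non-coherently (by magnitude). The preamble, SNR, and segment count below are illustrative assumptions; the RAKE combining stage of the paper's algorithm is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
L, M = 256, 8                     # preamble length, number of segments
seg = L // M
preamble = rng.choice([-1.0, 1.0], L)        # known BPSK preamble chips

snr_db = -10                                  # illustrative low-SNR condition
noise = rng.standard_normal(L) * 10 ** (-snr_db / 20)
rx = preamble + noise                         # received preamble plus noise

# Direct detection: one full-length coherent correlation
direct = abs(np.dot(rx, preamble))

# Segment correlation: coherent within each short segment, then the segment
# magnitudes are summed non-coherently (robust to residual frequency offset)
segments = (rx.reshape(M, seg) * preamble.reshape(M, seg)).sum(axis=1)
segmented = np.abs(segments).sum()
```

Shortening each coherent window is what keeps the hardware correlator small: only `seg`-length accumulators are needed instead of a full `L`-length one.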
During data transmission in wireless sensor networks (WSNs), data accumulation in the communication link leads to frequent data collisions, which further consumes network energy. To address this problem, we propose an adaptive sink-routing decision algorithm for minimum energy consumption (ASD-MC). First, when the data fusion degree is between its maximum and minimum values, the aggregation gain is exploited to calculate the proportional relationship between the data fusion degree and the node distance-related parameters. Then, we discuss the correlation among different nodes with three distance correlation coefficients. When multiple nodes do not conduct data fusion, the condition for the existence of the data fusion degree is proved for the next-hop node. In addition, according to the functional relationship of the data compression energy ratio, we consider the data compression and decompression process on the link at both the source node and the sink node, and then provide the procedure for calculating the network energy consumption. Based on the above analysis, we employ the energy conversion model to derive the necessary condition on the Euclidean distance between any two nodes. Furthermore, the implementation procedure of the proposed algorithm is presented. Finally, simulation results show that, compared with existing algorithms, the proposed algorithm can reduce the network energy consumption and the average network delay by 10.29% and 12.57%, respectively, which verifies the effectiveness and validity of the ASD-MC algorithm.
Since the Doppler frequency offset and Doppler rate-of-change caused by the accelerated motion of moving targets are hard to track and eliminate in high-dynamic environments, seriously affecting the performance of the receiver, we propose a carrier synchronization method that combines pilots with Viterbi-decoding per-survivor processing (PSP). First, an open-loop acquisition is performed based on the minimum mean square error (MMSE) principle, and the frequency offset is modeled in the form of a Taylor series expansion. Then, a known sequence is used to estimate the Doppler frequency offset and the Doppler rate-of-change, thereby limiting the residual frequency offset and rate-of-change to a small range. Meanwhile, to overcome the performance degradation caused by the accumulation of long-term phase errors, closed-loop tracking is implemented with the PSP technology, and the soft information determined symbol-by-symbol by the Viterbi decoder is input to a third-order phase-locked loop (PLL). Moreover, the frequency synthesizer is adjusted according to the error of the phase detector, and this action is iterated to minimize the phase detector error and realize carrier synchronization. Finally, coherent demodulation reception is achieved. Simulation results show that when the normalized frequency offset is less than 0.1 and the normalized rate-of-change is less than 10^(-3), the proposed method can track the carrier accurately. Furthermore, under a bit error rate (BER) constraint of 10^(-5), there is only a 0.8 dB difference in SNR between the proposed decoding-synchronization cascade tracking algorithm and ideal carrier synchronization without Doppler frequency offset and rate-of-change. In addition, the proposed algorithm is significantly superior to the traditional phase-locked loop and MMSE algorithms.
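The Taylor-series model of the offset means the pilot's phase is (to second order) a quadratic in time, so the Doppler offset and rate-of-change can be recovered by a least-squares polynomial fit on the unwrapped pilot phase. The snippet below is a toy stand-in for the open-loop acquisition step; the sample rate, offset values, and noise level are illustrative assumptions.

```python
import numpy as np

fs = 1.0e4                      # sample rate in Hz (illustrative)
t = np.arange(1000) / fs
f0, fdot = 120.0, 30.0          # true Doppler offset (Hz) and rate (Hz/s)

# Pilot phase after carrier wipe-off: 2*pi*(f0*t + 0.5*fdot*t^2) plus phase noise
rng = np.random.default_rng(5)
phase = 2 * np.pi * (f0 * t + 0.5 * fdot * t ** 2) \
      + 0.01 * rng.standard_normal(t.size)

# Second-order Taylor model of the offset -> quadratic phase; least squares
# recovers both terms (a simple stand-in for the paper's MMSE acquisition)
c2, c1, _ = np.polyfit(t, phase, 2)
f0_hat = c1 / (2 * np.pi)       # estimated Doppler frequency offset (Hz)
fdot_hat = 2 * c2 / (2 * np.pi)  # estimated Doppler rate-of-change (Hz/s)
```

Once these coarse estimates pull the residual offset into a small range, the closed-loop PSP/PLL stage only has to track the remaining slow phase drift.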
An efficient technique for realizing a dual-band multi-polarized waveguide antenna array featuring a shared aperture, low profile, light weight and high radiation efficiency is proposed. A rectangular cavity-backed slotted antenna is adopted to realize L-band vertical polarization, while the C-band horizontal and vertical polarizations are realized by utilizing ridged-waveguide slotted antennas. The L-band antenna is located between two C-band vertically polarized slotted waveguides and below a C-band horizontally polarized waveguide. To achieve a more miniaturized volume for the shared-aperture antenna array, an efficient feeding design with a metal bridge is introduced into the L-band rectangular waveguide. In order to confirm the feasibility of the proposed design, an array prototype with 8×16 elements operating in the C-band and 2×2 elements operating in the L-band is fabricated and measured. Experimental results demonstrate that the proposed antenna array has a satisfactory impedance bandwidth of 12% in the L-band and of 5.5% in the C-band with VSWR < 2, and a radiation efficiency above 85% in both frequency bands.
Driven by the vision of 5G communications, an efficient solution which applies coded caching technology to obtain data quickly in fog networks is proposed. First, this paper models the cache architecture based on fog computing networks as a two-hop network, and proposes a decentralized online coded caching scheme for it based on file splitting and MDS encoding. This scheme ensures that the cached content of relays and users stays consistent with the files on the server by updating the server files and then updating the cached content of relays and users, so as to maintain the validity of the cached content. We then analyze the tradeoff between cache memory and traffic load for two-hop networks in which each relay and user has limited cache memory. Simulation results show that the proposed scheme has a low transmission load and can relieve network congestion effectively.
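The file splitting and MDS encoding idea can be shown with the simplest MDS code there is: a single XOR parity over k file segments, i.e. a (k+1, k) code. Real coded-caching schemes use general (n, k) MDS codes such as Reed-Solomon, where any k of the n coded segments suffice; the XOR code below illustrates the same recoverability property for one erasure.

```python
import numpy as np

rng = np.random.default_rng(10)
k = 4
file = rng.integers(0, 256, (k, 64), dtype=np.uint8)   # file split into k segments

# (k+1, k) XOR parity code: the parity segment is the XOR of all data segments
parity = np.bitwise_xor.reduce(file, axis=0)
coded = np.vstack([file, parity[None, :]])             # segments cached at relays/users

# Any single lost segment (here segment 2) is recoverable by XOR-ing the rest,
# which is what keeps cached content valid when one copy is missing or stale
lost = 2
survivors = np.delete(coded, lost, axis=0)
recovered = np.bitwise_xor.reduce(survivors, axis=0)
```

The caching gain comes from the fact that coded segments are interchangeable: a user needs *any* k distinct segments, not specific ones, so placement across relays can be decentralized.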
To conceal the existence of secret information, a wireless covert channel established in the modulation of the physical layer converts secret information into artificial noise for transmission. In multiple-input multiple-output (MIMO) communication scenarios, due to the openness of the transmission medium, a detector can use the correlation among the antenna signals to discover the covert channel. To solve this problem, this paper proposes a MIMO wireless covert channel based on precoding. Assuming that both the sender and the detector can obtain the MIMO channel state information (CSI), the sender can use this CSI to precode the generated artificial noise so as to remove the correlation among the multiple signals received by the detector. The receiver can generate the precoding matrix from the CSI transmitted over the public channel, and then extract the secret information. Simulation results show that, compared with the existing method, the proposed MIMO wireless covert channel removes the correlation among the multiple signals received by the detector and effectively improves the undetectability, while the reliability is also improved to a certain extent.
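One concrete way CSI-based precoding can decorrelate the detector's antenna signals is zero-forcing: precoding the artificial-noise streams with the channel pseudo-inverse makes the received signal equal the i.i.d. streams themselves. This is an illustrative choice consistent with the shared-CSI assumption, not necessarily the paper's exact precoder.

```python
import numpy as np

rng = np.random.default_rng(6)
Nt, Nr, T = 4, 2, 20000           # transmit antennas, receive antennas, symbols

H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))

# Secret-bearing "artificial noise": i.i.d. circular Gaussian streams
n = (rng.standard_normal((Nr, T)) + 1j * rng.standard_normal((Nr, T))) / np.sqrt(2)

# Zero-forcing precoder built from the shared CSI: H @ pinv(H) = I when H has
# full row rank, so the detector observes the uncorrelated streams directly
W = np.linalg.pinv(H)             # Nt x Nr precoding matrix
y_pre = H @ (W @ n)               # detector's received signal

def xcorr(y):
    """Magnitude of the normalized cross-correlation between antennas 0 and 1."""
    c = np.cov(y)
    return abs(c[0, 1]) / np.sqrt(c[0, 0].real * c[1, 1].real)
```

Without the precoder the streams would be mixed by `H` and hence correlated across antennas, which is exactly the statistical fingerprint the detector exploits.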
For high-speed wireless communication, the Internet, new-generation radar systems and real-time signal processing systems, microwave measurement systems with a large instantaneous bandwidth (greater than 10 GHz) and a wide working frequency range (several MHz to several hundred GHz) are required. To meet these demands, a simple and novel optical approach to broadband microwave measurement is proposed. In the experiment, Doppler frequency shifts are estimated with clear direction discrimination and high resolution, with a maximum error of 8 Hz. The approach is then applied to phase detection, and phase shifts are successfully measured and estimated for microwave signals with operating frequencies ranging from 10 to 40 GHz; the maximum phase measurement error is calculated to be less than 7 degrees. The approach is simple, easy to implement, multifunctional and tunable, so it can provide a more competitive way to realize microwave signal measurement for future wideband electronics applications than electronic solutions.
Aiming at the tracking ambiguity problem caused by the multi-peak characteristic of the autocorrelation function of Binary Offset Carrier (BOC) and its derivative signals, this paper proposes an unambiguous tracking algorithm based on the characteristics of the signal correlation function. First, the general expression for the cross-correlation function between the BOC reference and the derivative modulation signal is obtained through the shape code vector constructed in this paper. The pseudo-code of the received signal is used as a local reference signal, and a special local code waveform is designed to perform correlation operations with the corresponding received signal to obtain a set of correlation sub-functions. Then, the sub-functions are recombined using the reconstruction rules proposed in this paper, so that the edge peaks are completely eliminated and a new correlation function with a single peak is retained. Compared with the traditional code tracking loop, the code tracking loop of the improved algorithm removes one filter circuit, simplifying the structure and reducing the hardware implementation complexity. Simulation results show that the secondary peaks of the correlation function are eliminated while the sharpness of the main peak is retained. Comparison with the traditional tracking algorithm shows that the code tracking accuracy of this algorithm is higher at the same carrier-to-noise ratio. The phase discrimination function has no false lock points other than the zero-crossing point, so no false lock problem arises. Compared with traditional tracking ambiguity elimination algorithms, this method can effectively achieve unambiguous tracking of BOCs(m,n) and CBOC(6,1,1/11) signals.
Focusing on the current shortcomings of low resource utilization, high equipment energy consumption and deteriorating user service quality in mobile edge computing enhanced heterogeneous cloud radio access networks, an energy-consumption-aware communication and computing resource allocation mechanism is proposed from the perspective of spectrum resources and computing resources. Taking the network throughput as revenue and the energy consumption as cost, a profit model framework from the perspective of service providers is established. In order to avoid the waste or overload of edge server resources caused by uneven resource allocation, the network throughput is first improved by analyzing the various service requests coming from users and reasonably allocating spectrum resources with a sparse matrix algorithm. For computing resources, a heuristic algorithm is designed to determine user association and user computing resource demand, so that each edge server can be fully utilized. Based on the results of resource utilization and considering the capacity constraints of the optical fiber fronthaul link, the mobile edge computing server can be dynamically deployed at the macro base station or at remote radio heads to reduce the equipment overhead. Simulation results for different parameter indexes and for service requests at different times of day show that the proposed mechanism can effectively increase network throughput, reduce network energy consumption and decrease the blocking probability of the optical fiber fronthaul link, and is thus clearly superior to other algorithms.
As a new network architecture, the space-air-terrestrial integrated network has the advantages of wide coverage and ubiquitous seamless access, but it also faces the contradiction between users' growing demands and limited network service resources. Introducing edge computing into space-air-terrestrial integrated networks can greatly improve the system's service processing capability. First, in order to improve the resource utilization of Space-Air-Terrestrial aided Mobile-access Edge Cloud (SAT-MEC) networks and provide users with diversified, high-quality network services, we connect a group of virtual network functions according to a certain service logic, forming a dynamic and reconfigurable service function chain. Considering the highly dynamic and heterogeneous characteristics of the SAT-MEC network, an efficient scheduling method for its dynamic service function chains is studied, and the system model of the SAT-MEC network is designed. On this basis, the objective function of end-to-end delay optimization constrained by network resources and service requests is constructed. Second, exploiting the efficient parallel computing advantages of quantum machine learning, the path selection problem of the service function chain is modeled as a Hidden Markov Model based on the Open Quantum Random Walk, and the model is solved by the Quantum Backtrack Decoding method. Compared with traditional exact solutions and heuristic methods, simulation results show that the proposed method can improve the success rate of service requests and reduce the average end-to-end delay under high network traffic loads.
The node localization problem in large-scale wireless sensor networks can be formulated as a highly nonlinear, nonconvex optimization problem that is hard to solve directly at scale. This paper proposes a new distributed localization algorithm to solve this problem. First, the global undirected graph formed by the large-scale wireless sensor network is decomposed into a series of partially overlapping subgraphs, and the global optimization problem is accordingly decomposed into a series of small-scale subproblems that are solved iteratively; the optimization problem in each subgraph can be solved independently. The new distributed localization algorithm consists of two steps in each iteration. First, the Barzilai-Borwein gradient method is used to estimate the locations of the nodes in each partially overlapping subgraph; this gradient method has a low computational cost and greatly speeds up convergence. Second, the estimates for sensor nodes shared by different partially overlapping subgraphs are fused and averaged. Theoretical analysis and simulation results show that, compared with existing methods, the proposed distributed localization algorithm achieves higher scalability and localization accuracy, and can be used for localization in large-scale sensor networks.
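The Barzilai-Borwein step inside each subgraph can be sketched on a toy single-subgraph instance: minimize the stress function (sum of squared range-measurement residuals) by gradient descent, with the BB1 step size computed from successive iterates. The network size, full pairwise distance measurements, and the step-size cap are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 20
true_pos = rng.uniform(0, 10, (N, 2))                 # toy subgraph node positions
edges = [(i, j) for i in range(N) for j in range(i + 1, N)]
dist = {e: np.linalg.norm(true_pos[e[0]] - true_pos[e[1]]) for e in edges}

def stress(x):
    """Sum of squared residuals between estimated and measured distances."""
    return sum((np.linalg.norm(x[i] - x[j]) - dist[(i, j)]) ** 2 for i, j in edges)

def grad(x):
    g = np.zeros_like(x)
    for i, j in edges:
        diff = x[i] - x[j]
        r = np.linalg.norm(diff)
        if r > 1e-12:
            coef = 2 * (r - dist[(i, j)]) / r
            g[i] += coef * diff
            g[j] -= coef * diff
    return g

x = true_pos + 0.5 * rng.standard_normal((N, 2))      # noisy initial estimates
g = grad(x)
alpha, f0 = 1e-3, stress(x)                           # fixed first step
for _ in range(100):
    x_new = x - alpha * g
    g_new = grad(x_new)
    s, y = (x_new - x).ravel(), (g_new - g).ravel()
    denom = s @ y
    if abs(denom) > 1e-12:
        alpha = min(abs(s @ s / denom), 1e-2)         # BB1 step, capped for safety
    x, g = x_new, g_new
f1 = stress(x)                                        # stress after refinement
```

The BB step approximates curvature from two gradient evaluations, so it gets near-Newton convergence speed at the per-iteration cost of plain gradient descent.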
Considering the application of simultaneous wireless information and power transfer in an underlay cognitive non-orthogonal multiple access (NOMA) vehicular network, a Two-Stage Relay Selection (TSRS) scheme is proposed according to the statistical characteristics of the NOMA network when the secondary network is interfered with by the primary network. In the first slot, the secondary-network source node broadcasts superimposed signals to all relays using a fixed power allocation scheme determined by the statistical characteristics of the channel quality of the second-hop link. In the second slot, the optimal relay is selected in two steps based on the signals received at the relays; the selected relay uses the power splitting scheme to perform SWIPT, and the harvested power is used only to forward the decoded signal, with the energy consumption of encoding and decoding not considered. The two secondary users use successive interference cancellation to decode the received superimposed signals, while being interfered with by the primary network signal. Under the interference temperature constraint, an approximate expression for the outage probability of each secondary user is derived, and the correctness of the numerical analysis is verified. By analyzing the influence of the system parameters on the outage probability of the secondary users, it is shown that the TSRS scheme is superior to the existing system in improving the outage performance.
Aiming at the problem of losing effective features when Mel-domain features are used for speech enhancement, and to overcome the limitation that the Mel-domain filter bank loses effective features at high frequencies, this paper proposes a method that extracts Gammatone-domain features of noisy speech using a power function more in line with the compressive perception of the human ear, and deeply fuses them with Mel-domain features for speech enhancement. At the same time, in order to capture the connection between the transient information of the speech and the speech information of adjacent frames, the differential derivatives of the mixed feature are computed and fused with the initial feature to obtain the final mixed feature. Second, since traditional time-frequency masking cannot be adjusted automatically according to differences in the signal-to-noise ratio, the intelligibility of the enhanced speech is affected. In order to improve the speech quality while improving intelligibility, a soft mask that can be adjusted adaptively according to the signal-to-noise ratio information is proposed, and the phase difference information of the speech is incorporated. Finally, experiments are conducted on multiple utterances under different noise backgrounds. Experimental results show that speech enhancement with the mixed features and the adaptive soft mask improves both the subjective speech quality and the short-time objective intelligibility of the enhanced speech, which verifies the effectiveness of the proposed algorithm.
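An SNR-adaptive soft mask can be sketched in Wiener-filter style: each time-frequency bin gets a gain between 0 and 1 that grows with the local a-priori SNR. The exponent `alpha` below is an illustrative tuning knob standing in for the paper's adaptation rule, and the toy spectrograms are random; the phase-difference term of the paper is omitted.

```python
import numpy as np

def adaptive_soft_mask(speech_pow, noise_pow, alpha=1.0):
    """SNR-adaptive soft mask (Wiener-style sketch, not the paper's exact form):
    xi is the local a-priori SNR per time-frequency bin; alpha sharpens or
    softens the mask and is an illustrative adaptation parameter."""
    xi = speech_pow / np.maximum(noise_pow, 1e-12)
    return (xi / (xi + 1.0)) ** alpha

rng = np.random.default_rng(8)
s = rng.uniform(0, 4, (64, 32))      # toy speech power spectrogram
v = rng.uniform(0, 4, (64, 32))      # toy noise power spectrogram
mask = adaptive_soft_mask(s, v)
enhanced = mask * (s + v)            # mask applied to the noisy power spectrum
```

Because the gain rises smoothly with SNR instead of switching between 0 and 1, high-SNR bins pass almost untouched while low-SNR bins are attenuated rather than zeroed, which is what preserves intelligibility.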
Currently, much research on invulnerability focuses on statistical analysis of the network structure, which is not suitable for C4ISR networks with specific functions. In view of this problem, this paper proposes the “information flow betweenness centrality distribution entropy” to measure the invulnerability of the C4ISR network, starting from the perspective of the function of information flow and drawing on the theories of betweenness centrality in complex networks and information entropy. The shortest information path from a reconnaissance node to an attack node is regarded as an operational information chain, and the “information flow betweenness centrality” is proposed to measure the number of operational information chains passing through a node in the C4ISR network. The uniformity of the information flow betweenness centrality is measured by its distribution entropy, and the effectiveness of this index in reflecting the invulnerability of the C4ISR network function is analyzed. Based on a representative C4ISR network structure, the invulnerability of the network after random attacks, degree attacks, betweenness centrality attacks and information flow attacks is simulated, and the application of the information flow betweenness centrality distribution entropy to improving the functional invulnerability of C4ISR networks is also analyzed. Simulation results show that this indicator is more sensitive and accurate than structural invulnerability indicators such as the average network efficiency, natural connectivity, degree distribution entropy and betweenness centrality distribution entropy of complex networks; it can find the critical point of C4ISR network function failure and provide guidance for improving the functional invulnerability of the C4ISR network.
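The building blocks of the proposed index — betweenness centrality and a distribution entropy over it — can be sketched in pure Python. The snippet computes ordinary shortest-path betweenness with Brandes' algorithm on a toy graph and then the Shannon entropy of the normalized centrality values; the paper's index additionally restricts paths to reconnaissance-to-attack information chains, which is not modeled here.

```python
from collections import deque
from math import log2

def betweenness(adj):
    """Brandes' algorithm: unweighted shortest-path betweenness centrality."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                              # BFS, counting shortest paths
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                          # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def distribution_entropy(values):
    """Shannon entropy of the normalized centrality distribution."""
    total = sum(values)
    ps = [v / total for v in values if v > 0]
    return -sum(p * log2(p) for p in ps)

# Toy chain-with-cycle graph as a stand-in for a C4ISR network structure
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
bc = betweenness(adj)
H = distribution_entropy(bc.values())
```

A low entropy signals that the information flow concentrates on a few bridge nodes (such as node 4 above), which is exactly where a targeted attack would break the network's function.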
The Network-on-Chip router is a key component of multi/many-core processors. In this paper, the schematic architectures of the synchronous First-Input-First-Output (FIFO) buffer and the asynchronous FIFO buffer are first reviewed and their latencies addressed. Then the architectures of the Network-on-Chip (NoC) and its router are introduced. On these foundations, an optimized clock tree distribution scheme, as well as a NoC router implementation under this scheme, is proposed. With this novel clock tree optimization, the latency of the NoC is greatly reduced. In addition, in order to decrease the area of the register-based FIFO, a latch-based FIFO is proposed, single-tick latch writing is ensured, and the sharing of multiple FIFOs is proposed. The proposed techniques are especially useful for embedded low-power many-core processors.