Aiming at the low transmission rate of traditional Differential Chaos Shift Keying (DCSK) and its high bit error rate when transmitting multi-user information, an improved orthogonal multi-user noise reduction differential chaos shift keying (OMU-NRDCSK) system is proposed. The chaotic sequence generated at the transmitter is duplicated and used as the information-bearing signal, and the information signals of multiple users, modulated by Walsh codes, are transmitted through different delays. At the receiver, the received signals are passed through a moving average filter and then correlated with themselves to demodulate the original information signals. The bit error rate formula of the system under the Rayleigh fading channel is derived and Monte Carlo simulation is carried out. Analytical and simulation results show that the OMU-NRDCSK system reduces the variance of the noise term by averaging the received signals, improves the bit error performance, and achieves a higher transmission rate than the DCSK system, thus effectively improving the bit error performance of the multi-user DCSK system.
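The noise-reduction step above relies on averaging P duplicated chips so that the noise variance shrinks by roughly a factor of P. A minimal sketch of that principle (the parameters and signal model below are illustrative, not the paper's exact system):

```python
import random

def moving_average(x, p):
    """Average each block of p repeated samples (the averaging step in NR-DCSK)."""
    return [sum(x[i:i + p]) / p for i in range(0, len(x), p)]

# Illustrative parameters (not from the paper): each chip repeated P times.
P = 4
random.seed(0)
chaos = [random.uniform(-1, 1) for _ in range(256)]   # chaotic reference sequence
sent = [c for c in chaos for _ in range(P)]           # duplicate each chip P times
noisy = [s + random.gauss(0, 0.5) for s in sent]      # AWGN channel

filtered = moving_average(noisy, P)

# Averaging P independent noise samples divides the noise variance by P,
# so the filtered estimate is closer to the clean chip than a single sample.
n = len(chaos)
err_raw = sum((noisy[i * P] - chaos[i]) ** 2 for i in range(n)) / n
err_avg = sum((filtered[i] - chaos[i]) ** 2 for i in range(n)) / n
print(err_avg < err_raw)  # True
```

The same averaging is what lowers the noise-term variance in the derived bit error rate expression.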
To improve the robustness of a positioning system and reduce the localization error, this paper proposes a fingerprint positioning method based on recursive Bayesian estimation. To overcome the blindness and unreliability of the location fingerprint data in the offline phase, a fingerprint database based on the sample variance is developed to measure the confidence of sampling values and reduce the impact of environmental factors, improving the reliability of online localization. The proposed method estimates the target position at the current moment by utilizing a Markov model established from the constraint relationship between successive moments of the source movement, which avoids the jump problem of position estimation and poor robustness, and improves the localization accuracy. Extensive experimental results demonstrate that the average localization error of the proposed algorithm is no more than 0.927 m, significantly lower than that of other traditional schemes (often by more than 30 percent).
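The recursion described above, predicting with the Markov motion model and then correcting with the fingerprint likelihood, can be sketched on a toy one-dimensional grid (the transition matrix and likelihood values below are invented for illustration):

```python
def bayes_update(prior, transition, likelihood):
    """One recursive Bayesian step: predict with the Markov model, correct
    with the fingerprint likelihood, then renormalize the belief."""
    n = len(prior)
    predicted = [sum(transition[j][i] * prior[j] for j in range(n)) for i in range(n)]
    posterior = [predicted[i] * likelihood[i] for i in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Toy 3-position corridor (illustrative): the target tends to stay put or
# move one step, which is what suppresses jumps in the position estimate.
T = [[0.8, 0.2, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]]
belief = [1 / 3, 1 / 3, 1 / 3]
# Fingerprint likelihoods of the observed RSS vector at each position.
for like in ([0.6, 0.3, 0.1], [0.5, 0.4, 0.1]):
    belief = bayes_update(belief, T, like)

print(max(range(3), key=lambda i: belief[i]))  # 0: the most probable position
```

Because the transition matrix assigns zero probability to long jumps, an outlier fingerprint reading cannot instantly teleport the estimate, which is the mechanism behind the improved robustness.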
In order to improve the accuracy of facial expression recognition and face classification in a local linear embedding network, an improved face image classification method based on the local linear embedding network is proposed. Based on the local linear embedding algorithm, the intra-class to inter-class discrimination matrix is used as the input of the network. At the same time, the reconstruction of the face image set is used to improve the local linear embedding algorithm, and the clustering-based improvement of the local linear embedding algorithm is embedded into the construction process of the convolution kernel, thus increasing the discrimination degree between different types of faces. Comparative experiments on the Extended Yale B and Olivetti Research Laboratory data sets analyze the performance of various methods on facial expression processing and face recognition tasks. The results show that, compared with the other methods, the recognition rate of the proposed improved locally linear embedding network for face image classification is raised by 11%~26%.
This paper presents solutions to the problems of the precise time interval measuring instrument, such as the contradiction between high resolution and wide measurement range, high temperature sensitivity, low reliability, and large volume and power consumption. The reference signal is used to generate calibration signals that automatically calibrate the analog circuits, which are greatly affected by temperature. The calibration data are used to amend the conversion coefficient between voltage and time interval, and the temperature sensitivity of the instrument is thereby greatly reduced. To avoid gross errors in the measurement results caused by false triggering of the counter, a double-counter synchronous measurement technique is adopted and a logic algorithm is used to analyze and correct the measured results. The electronic counting method and the time-to-voltage converter method are combined to meet the requirements on the measurement range and resolution of the instrument. The circuit board area of the prototype is only 10 cm². The prototype's effective resolution is better than 10 ps, the standard deviation of repeated measurement results is below 15 ps, the measurement range is wider than 20,000 seconds, and the measurement results are highly reliable.
In the complex battlefield environment, the uncertainty of target information makes target recognition difficult and error-prone, leading to a low accuracy of target recognition results. This paper proposes a data fusion method for multi-sensor target recognition based on the discrete factor. The method processes the output data of multiple sensors detected over multiple periods and regions, and derives the discrete factor of each sensor for the obtained target characteristics. According to the discrete factor, it determines the current weight of each sensor in target recognition, establishes the relative consistency and relative weighted consistency functions of multi-sensor target recognition, and, by combining the current weights with the consistency functions, constructs a support calculation model for the data fusion result of multi-sensor target recognition. Experimental results show that in complex environments the proposed discrete-factor-based data fusion method produces more accurate target recognition results, which conform better to reality than those of data fusion methods with sensor weights given in advance. It is shown that the method proposed in this paper is more reliable and has a certain anti-interference ability.
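The core idea, down-weighting sensors whose outputs disperse far from the consensus instead of fixing weights in advance, can be sketched as follows (the dispersion formula and inverse weighting are illustrative stand-ins, not the paper's exact definitions):

```python
def discrete_factor(readings, consensus):
    """Dispersion of one sensor's outputs around the consensus value; a larger
    factor means the sensor disagrees more (illustrative definition)."""
    return sum((r - consensus) ** 2 for r in readings) / len(readings)

def fuse(sensor_readings):
    """Derive per-sensor weights from the discrete factors, then fuse."""
    means = [sum(r) / len(r) for r in sensor_readings]
    consensus = sum(means) / len(means)
    factors = [discrete_factor(r, consensus) for r in sensor_readings]
    # Current weights: inversely proportional to the discrete factor.
    inv = [1 / (f + 1e-9) for f in factors]
    weights = [w / sum(inv) for w in inv]
    return sum(w * m for w, m in zip(weights, means)), weights

# Three sensors observing the same target attribute over four detection
# periods; the third sensor is noisy (e.g., under interference).
readings = [[0.90, 0.92, 0.91, 0.89],
            [0.88, 0.90, 0.91, 0.90],
            [0.40, 0.95, 0.20, 0.99]]
fused, weights = fuse(readings)
print(weights[2] < weights[0])  # True: the discordant sensor is down-weighted
```

Because the weights adapt to the observed dispersion, a jammed or faulty sensor loses influence automatically, which is the claimed source of the anti-interference ability.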
To alleviate the state-explosion problem of model checking, a novel distributed model checking method based on the propositional projection temporal logic (PPTL) is proposed. First, the property to be verified, given as a PPTL formula, is transformed into an automaton with the technique of the Labeled Normal Form Graph, which in turn is partitioned into multiple subautomata according to its strongly connected components. Then, each subautomaton, together with the system model in the Hierarchical Syntax Chart, is delivered to the members of the verification server cluster, and model checking of the system is carried out in parallel with the on-the-fly technique on multiple computers. Experimental results indicate that, compared with the standalone model checking approach, the proposed method can not only significantly reduce the time consumption but also verify more complex systems.
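The partitioning step can be illustrated with a plain strongly-connected-component decomposition: each SCC of the property automaton would be handed to one member of the verification cluster. A sketch using Tarjan's algorithm on a toy automaton (not the PPTL toolchain itself):

```python
def strongly_connected_components(graph):
    """Tarjan's algorithm; `graph` maps each node to its list of successors.
    Returns the list of SCCs, each as a set of nodes."""
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# Toy property automaton: states 0-2 form a cycle, state 3 is a sink loop.
automaton = {0: [1], 1: [2], 2: [0, 3], 3: [3]}
print(sorted(map(sorted, strongly_connected_components(automaton))))
# [[0, 1, 2], [3]]
```

Each resulting component is self-contained with respect to accepting cycles, which is what makes per-SCC subautomata a natural unit to distribute across the cluster.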
In order to find a way to transfer the state of a nonlinear random vibration system that is far away from the trivial equilibrium point back to it, a model predictive path integral control method is introduced. Under certain conditions, the Hamilton-Jacobi-Bellman equation for optimal control of nonlinear random vibration can be linearized by an exponential transformation. Based on the Feynman-Kac theorem, the path integral method can then be used to solve for the optimal control force. By introducing the idea of model predictive control, the control force can be updated in real time according to the actual state of the system. Numerical simulations are carried out for the control of two typical systems, the van der Pol equation and the Duffing equation. The results show that the state of the system can be quickly transferred to the vicinity of the trivial equilibrium point, while the control force and the real-time cost decrease monotonically after an initial fluctuation. Therefore, the model predictive path integral control method can be well applied to nonlinear random vibration systems far from the trivial equilibrium point.
The multiply-accumulators (MACs) in existing convolutional neural network (CNN) accelerators generally suffer from a large area, a high power consumption and a long critical path. Aiming at these problems, this paper presents a high-performance MAC based on transmission gates for CNN accelerators. This paper proposes a new data accumulation and compression structure suitable for the MAC, which reduces the hardware overhead. Moreover, we propose a new parallel adder architecture. Compared with the Brent-Kung adder, the proposed adder reduces the number of gate delay stages and improves the calculation speed without increasing hardware resources. In addition, we exploit the advantages of the transmission gate to optimize each unit circuit of the MAC. The 16-by-8 fixed-point high-performance MAC based on the methods presented in this paper has a critical path delay of 1.173 ns, a layout area of 9049.41 μm², and an average power consumption of 4.153 mW at 800 MHz under the SMIC 130 nm tt corner. Compared with the traditional MAC under the same conditions, the speed is increased by 37.42%, the area is reduced by 47.84%, and the power consumption is reduced by 56.77%.
The lack of efficient security guidance is a prominent problem in the design flow of high-level synthesis (HLS). To tackle this issue, this paper proposes a security-oriented high-level synthesis design flow targeting power side-channel vulnerabilities. The side-channel leakage is quantified by establishing a secure component module library, a more efficient and secure parallel scheduling mechanism is generated by optimizing the control flow, and a more secure storage system architecture is achieved by optimizing the data flow. The goal is to trade off performance against security, reducing side-channel risks at the early stage of design while generating more secure and efficient cryptographic cores in hardware. Furthermore, the proposed HLS design flow is verified on a field programmable gate array platform. Experimental results show that, in comparison with the traditional design flow, this method reduces the resources by 72% and the clock cycles by 70%, increases the throughput by 88%, and can lower the power side-channel risks within an ongoing design to the greatest extent.
In order to improve the accuracy of pedestrian and vehicle detection in foggy images, a novel and practical foggy-image pedestrian and vehicle detection network (FPVDNet) based on the Faster R-CNN is proposed. First, a fog-density discriminating module (FDM) is proposed to estimate the density of fog in the images, so that the prediction from the FDM can determine the subsequent operations for different fog densities (no fog, light fog, and dense fog). Then, the squeeze-and-excitation module (SE module) is designed to use the attention mechanism to improve the feature extraction capability of the network. Meanwhile, deformable convolution is applied to add offsets and learn them from target tasks, enhancing the transformation modeling capacity of CNNs. Finally, owing to the lack of an annotated fog image dataset, a simulated fog image training dataset is generated through the atmospheric scattering model. The simulated foggy image inherits the annotation of the clear image and adds information on the fog density. Experiments with the proposed FPVDNet are carried out on 1,500 real foggy images and 500 real clear images, and the results show that, compared with the original Faster R-CNN, the mean average detection accuracies are improved by 2%~4% with the FPVDNet.
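The fog simulation step uses the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)) with transmission t(x) = exp(-βd(x)), where β controls the fog density. A per-pixel sketch (the depth, β and airlight values are illustrative):

```python
import math

def add_fog(pixel, depth, beta, airlight=255.0):
    """Atmospheric scattering model: I = J*t + A*(1 - t), t = exp(-beta*depth).
    `pixel` is the clear-image intensity J, `beta` sets the fog density, and
    the airlight A is assumed constant over the image (a common simplification)."""
    t = math.exp(-beta * depth)
    return pixel * t + airlight * (1.0 - t)

# The same distant dark pixel under light fog vs. dense fog (toy numbers).
light = add_fog(pixel=50.0, depth=80.0, beta=0.01)
dense = add_fog(pixel=50.0, depth=80.0, beta=0.05)
print(light < dense)  # True: denser fog pushes the pixel toward the airlight
```

Applying this model to annotated clear images yields foggy training samples that keep the original bounding boxes while adding a known fog-density label, exactly the property the training dataset above exploits.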
The fundamental problem of multi-user computation offloading for mobile edge computing is investigated in heterogeneous overlay networks, where each user can connect and offload its computing workloads to multiple heterogeneous wireless access points in parallel. The problem of average user overhead minimization with a delay constraint is formulated to obtain the optimal strategy of workload partition and heterogeneous resource allocation. A successive convex approximation (SCA) based algorithm is then developed, which addresses the non-convex optimization problem by iteratively solving a sequence of separable strongly convex problems. Numerical results show that the proposed offloading mechanism can effectively reduce the service latency and energy consumption of users compared with the conventional non-cooperative approach.
Low-dose CT has the advantages of low radiation and high efficiency, but the noise and artifacts in low-dose CT images reduce the reliability of diagnosis. In order to improve the quality of low-dose CT images, this paper enhances the visual quality of reconstructed CT images in the wavelet domain and improves the running speed by combining multi-dilated convolution and sub-pixel convolution, so that the model can be better deployed on CT equipment. The data set of the "2016 AAPM Low Dose CT Image Challenge" is used to evaluate the proposed method. Experimental results show that the visual quality of the reconstructed CT images is better. Compared with RED-CNN, the average PSNR of the proposed method is improved by 0.1428 dB (1 mm) / 0.0939 dB (3 mm), and the running speed on the CPU and GPU is increased by more than 55% and 50%, respectively.
In order to solve the problem of privacy leakage in the sharing of genomic data, a genomic data privacy-preserving scheme based on an improved private set intersection (PSI) computing protocol is presented. It leverages the Bloom filter, cuckoo hashing, and the random oblivious transfer (ROT) extension protocol to not only protect the genomic sequence information of the user when detecting a disease-causing gene but also judge whether he or she has certain disease factors. Moreover, the correctness and security of the proposed scheme in the detection scenario of disease-causing genes are proved in the semi-honest security model. In addition, a series of experiments is conducted to verify the efficiency of the proposed scheme. The results reveal that the running time and communication overhead of the proposed scheme are much lower than those of the existing PSI schemes.
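One building block named above, the Bloom filter, can be sketched as follows; the cuckoo hashing and OT-extension parts of the protocol are omitted, and the variant identifiers are hypothetical examples:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item, membership tests with
    no false negatives and a small false-positive rate.  Illustrative only;
    the PSI protocol combines this with cuckoo hashing and ROT extension."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # Derive k positions by hashing the item with k different prefixes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

# One party inserts disease-related markers; the other can test membership
# of its own variants without seeing the rest of the set.
bf = BloomFilter()
for marker in ("rs334", "rs429358", "rs7412"):
    bf.add(marker)
print("rs334" in bf)
```

In the PSI setting the filter contents are additionally blinded by the oblivious transfers, so neither party learns items outside the intersection.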
A non-canonical vortex is an optical helical phase structure with the same topological charge as a canonical vortex but a different phase distribution. Based on the Richards-Wolf vectorial diffraction theory, the expression for a strongly focused, linearly polarized non-canonical vortex beam is derived, and its propagation properties in the focal region are studied numerically. It is shown that the transverse focal shift occurs not only in strongly focused, off-axis (or on-axis) canonical vortex beams, but also in a strongly focused non-canonical vortex beam. The transverse focal shifts in these two fields have the same form, but the factors influencing them are quite different. It is also demonstrated that, because of the transverse focal shift, if both the semi-aperture angle and the phase distribution factor meet a certain requirement, the total field intensity pattern in the focal region can rotate clockwise along the propagation direction, while the intensity maxima also rotate by 180° from the negative half space to the positive half space. These results provide a new way of controlling the field distribution in structured fields, which may be applied in optical tweezers.
In order to make full use of the visual and auditory perception channels and realize efficient brain-controlled character spelling, a two-level region-based spelling paradigm is proposed. In the first level of the paradigm, the target region is selected based on the motion-onset visual evoked potential, and the code division multiple access method is introduced to improve the selection rate. In the second level, the target character is encoded based on the hybrid motion-onset visual evoked potential and auditory P300 to make full use of the audio-visual hybrid effect and improve the accuracy of target character selection. In order to decode the collected EEG signals effectively, a classification and recognition algorithm for EEG signals combined with deep linear discriminant analysis is proposed. Experimental results show that the average classification accuracies of the deep linear discriminant analysis algorithm on the two levels of EEG signals are 61.7% and 74%, respectively, which is obviously higher than those of the traditional method and the two convolutional neural network methods. Therefore, the algorithm can effectively improve the decoding performance of the two-level brain-computer interface induced by the audio-visual hybrid paradigm.
To tackle the low accuracy of test-suite-based automatic program repair methods, this paper proposes a rule-based automatic program repair method named RuleFix. The proposed method first mines implicit programming rules in programs to locate defects, then selects an appropriate patch according to the implicit programming rules, and lastly verifies the patch by utilizing a program synthesis tool to ensure the correctness of the repair result. Moreover, to tackle the problem that existing rule mining algorithms cannot effectively mine low-frequency rules, a low-frequency rule mining algorithm is proposed, which derives new rules from the existing rules to improve the ability of rule mining. Finally, a prototype tool is implemented based on the proposed method, and the proposed method is compared with existing automatic program repair methods. Experimental results demonstrate that the proposed method achieves a significantly higher repair rate and accuracy than the existing GenProg and PAR methods.
In order to reduce both the power loss of the switched-capacitor converters (SCCs) integrated in the on-chip power delivery network and the voltage noise at the load, this paper proposes a method to optimize the capacitance allocation between the flying capacitors of the SCCs and the decoupling capacitors at the load. By formulating and solving an inequality-constrained nonlinear programming problem, the SCCs' flying capacitance and the decoupling capacitance are optimally allocated, and the sum of power loss and voltage noise is effectively reduced. Experimental results show that the joint optimization of the SCCs' capacitance and the decoupling capacitance can reduce the sum of power loss and voltage noise by about 11%~28%. For larger power delivery networks, this method can efficiently reduce the power loss and voltage noise.
In order to solve the problem of precipitation particle classification when the polarization parameter data of a dual-polarization meteorological radar are randomly missing, a method based on matrix completion (MC) and the decision tree support vector machine multi-classifier (DTSVMs) is proposed. First, the polarization parameter data with randomly missing entries are reconstructed by the matrix completion algorithm; then the training data are used to train the DTSVMs; finally, the precipitation particle classification of the reconstructed data is realized by the well-trained DTSVMs. The processing of measured data and the analysis of the results prove that this method can effectively solve the precipitation particle classification problem when polarization parameter data are randomly missing.
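The reconstruction principle, exploiting the low-rank structure of the parameter matrix to fill in randomly missing entries, can be sketched with a rank-1 alternating-least-squares toy (the paper's MC algorithm and the radar data are of course richer than this):

```python
def complete_rank1(obs, shape, iters=50):
    """Recover a rank-1 matrix u v^T from partially observed entries by
    alternating least squares.  `obs` maps (row, col) -> observed value.
    Toy version of matrix completion; real MC solvers handle higher ranks."""
    m, n = shape
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        for i in range(m):  # fix v, solve each u[i] in least squares
            num = sum(val * v[j] for (r, j), val in obs.items() if r == i)
            den = sum(v[j] ** 2 for (r, j) in obs if r == i)
            if den:
                u[i] = num / den
        for j in range(n):  # fix u, solve each v[j] in least squares
            num = sum(val * u[i] for (i, c), val in obs.items() if c == j)
            den = sum(u[i] ** 2 for (i, c) in obs if c == j)
            if den:
                v[j] = num / den
    return u, v

# A 3x2 rank-1 matrix [[1, 2], [2, 4], [3, 6]] with entry (2, 1) missing.
observed = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 4.0, (2, 0): 3.0}
u, v = complete_rank1(observed, (3, 2))
print(round(u[2] * v[1], 3))  # the missing entry, recovered as ~6.0
```

The same low-rank consistency is what lets MC restore missing polarization parameters well enough for the downstream DTSVM classification.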
Existing video smoke detection methods have a low detection accuracy in complex scenes and cannot accurately detect smoke areas in video frames. In this paper, a phased smoke detection algorithm that combines the smoke movement process with a target detection algorithm is proposed. First, an improved ViBe algorithm based on smoke color features is used to extract the continuously moving smoke in the video. Then, the YOLO v3 model is used as the target detection network: a channel attention mechanism is added to the residual structure of its backbone network, and focal loss and GIoU are utilized to improve the loss function. On the smoke image data set, the detection time of the improved network on a single picture is 38.4 ms and the mAP reaches 92.13%, which is 2.19% higher than that of the original model. While smoke motion is being extracted, the same frame is sent to the improved YOLO v3 for smoke detection. Finally, a comprehensive discrimination is made based on the staged smoke detection results. Tests on public smoke videos show that the algorithm achieves an average detection rate of 98.88%, which proves that the algorithm has strong adaptability, a high detection efficiency in complex scenes and a high practical application value.
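GIoU, used above to improve the loss function, extends IoU with a penalty based on the smallest enclosing box, so even disjoint boxes receive a useful gradient (the loss term is typically 1 - GIoU). A minimal sketch for axis-aligned boxes:

```python
def giou(a, b):
    """Generalized IoU for boxes given as (x1, y1, x2, y2).
    GIoU = IoU - |C minus (A union B)| / |C|, where C is the smallest
    box enclosing both A and B.  Ranges over (-1, 1]."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    # Smallest enclosing box C.
    c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c - union) / c

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0 for identical boxes
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))  # negative for disjoint boxes
```

Unlike plain IoU, which is zero for any pair of non-overlapping boxes, GIoU becomes more negative the farther apart the boxes are, so the regression loss still pulls predicted boxes toward distant smoke regions.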
Current methods focusing on 3D model recognition and segmentation have to some extent ignored the relationship between the high-level global single-point features and the low-level local geometric features of those models, resulting in poor recognition results. A multi-feature fusion approach which takes into consideration the aforementioned ignored relationship is proposed. First, a global single-point network is established to extract the global single-point features with high-level semantic recognition ability by increasing both the width of convolution kernel and the depth of the network. Second, an attentional fusion layer is constructed to learn the implicit relationship between global single-point features and local geometric features to fully explore the fine-grained geometric features that can better represent model categories. Finally, the global single-point features and fine-grained geometric features are further fused to achieve the complementation of advantages and enhance the feature richness. Experimental verification is carried out on the 3D model recognition datasets ModelNet40, ModelNet10 and segmentation datasets ShapeNet Parts, S3DIS, vKITTI, respectively, and comparison with current mainstream recognition algorithms shows that the proposed algorithm not only has higher recognition and segmentation accuracy, but also has stronger robustness.