Existing dehazing networks exhibit limited capability in feature extraction and color bias suppression under low-light haze conditions, often leading to detail loss and color distortion. To address these issues, we propose FCformer, a Joint Feature Enhancement and Color Fusion Network for low-light image dehazing. To restore image structure and texture, a feature enhancement backbone is designed with window-spatial and sparse-channel modules that focus on key local and global features. A color fusion branch improves chromatic representation by incorporating color correction and fusion. A learnable prior constraint module based on the atmospheric scattering and Retinex models regularizes the output. Finally, a composite loss function combining reconstruction, perceptual, and color losses guides better detail and color restoration. Experiments show that FCformer surpasses DehazeFormer by 0.98 dB in PSNR with a similar parameter count, and achieves a PSNR comparable to that of ACANet while using 96.84% fewer parameters, demonstrating superior visual quality.
As an active deceptive jamming technique, the towed radar active decoy (TRAD) plays a crucial role in modern electromagnetic spectrum warfare. TRADs modulate and retransmit radar signals to simulate target echoes, inducing radars to track stronger false targets and thereby providing self-defense protection for the aircraft. To address the vulnerability of terminal guidance radars to TRAD deception, this paper proposes a countermeasure method based on spatial morphological features. Since jamming signals cannot realistically replicate the spatial morphology of targets under wideband conditions, the proposed method enables effective target discrimination and anti-jamming imaging. First, a monopulse three-dimensional (3-D) imaging algorithm is employed to reconstruct the spatial distribution of strong scatterers in the radar's forward-looking region. Second, spatial filtering is applied to the imaging results to remove noise points and cluster the strong-scattering point clouds. Finally, spatial morphological features are extracted from each point cloud cluster, and a discriminator performs target identification, yielding an anti-jamming 3-D image of the target. Theoretical analysis and experimental results reveal significant spatial morphological differences between the false targets produced by the TRAD and actual targets, validating the effectiveness of the proposed method in countering TRAD jamming.
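The spatial-filtering and feature-extraction steps can be illustrated with a minimal, self-contained sketch (plain Python, not the paper's implementation): a greedy distance-threshold clustering of imaged scatterer points, with the maximum cluster spread as a hypothetical morphological feature. The point coordinates, `eps`, and `min_pts` below are illustrative assumptions.

```python
from math import dist

def cluster_scatterers(points, eps=1.0, min_pts=2):
    """Spatial filtering of a 3-D imaging result: greedily group
    strong-scatterer points lying within eps of an existing cluster
    member, then drop clusters smaller than min_pts as noise."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(dist(p, q) <= eps for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return [c for c in clusters if len(c) >= min_pts]

def extent(cluster):
    """Toy morphological feature: maximum pairwise spread of a
    cluster; a TRAD false target is nearly point-like, while a real
    aircraft's scatterers are spread over its physical span."""
    return max(dist(a, b) for a in cluster for b in cluster)

pts = [(0, 0, 0), (0.5, 0, 0), (0, 0.4, 0),   # extended real target
       (30, 30, 5), (30.3, 30, 5),            # compact decoy-like pair
       (80, 80, 80)]                          # isolated noise point
clusters = cluster_scatterers(pts)
```

A discriminator could then threshold `extent` per cluster to separate real targets from TRAD false targets.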
Text-based screen content is widely used in scenarios such as online meetings, cloud gaming, and remote control, and it generates massive data volumes that put significant pressure on storage and bandwidth. Traditional screen content transmission methods are constrained by the Shannon limit and cannot break through this transmission bottleneck. Moreover, existing semantic-communication-based image coding and transmission methods mainly target natural scene images; screen content images remain insufficiently studied. In response, a semantic coding and transmission system for text-based screen content images is constructed for the first time. Specifically, an end-to-end semantic framework is first designed to extract the semantic features of textual information in images while compressing or discarding irrelevant data; by reducing the amount of unnecessary data transmitted, efficient encoding of key semantic features is achieved. Next, a dynamic encoding strategy utilizing channel state information (CSI) feedback is proposed, which optimizes the encoding process through real-time acquisition of wireless channel characteristics to enhance transmission robustness and efficiency. Additionally, an auxiliary loss function is designed to protect the semantic features relevant to downstream tasks. Finally, experimental results demonstrate that, compared with state-of-the-art transmission methods, the proposed approach achieves higher communication efficiency under different channel conditions and bit rates.
An indoor visible light positioning (VLP) scheme based on a multi-strategy improved Jaya algorithm is proposed for the indoor 3D VLP problem with a single light-emitting diode (LED), taking into account the influence of non-line-of-sight (NLOS) links. In the system model, the NLOS link is treated as interference, and PAM-DMT-based least squares (LS) channel estimation is used to estimate the LOS channel gain. The sum of squared differences between the estimated channel gain and the channel gain computed at a test point serves as the fitness function, and the multi-strategy improved Jaya algorithm searches for the optimum to realize localization. Subsequently, the 3D positioning errors for different numbers of PDs at different inclination angles and heights are simulated and compared, and the convergence and performance of the proposed algorithm are compared with those of four other algorithms. The results show that the PAM-DMT-based LS channel estimation method effectively reduces the localization error caused by NLOS interference: the average localization error with channel estimation is 84.64% lower than without it. When the receiver uses 5 PDs with an inclination angle of 75°, the positioning error fluctuates least with height and the average positioning error is smallest under NLOS conditions. Compared with the four other algorithms, the multi-strategy improved Jaya algorithm achieves higher positioning accuracy with fewer iterations. This work provides a useful reference for the study of indoor single-LED VLP systems.
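The core Jaya update (move each candidate toward the best solution and away from the worst, with no algorithm-specific control parameters) can be sketched as follows. This is the basic algorithm only, not the paper's multi-strategy variant, and the quadratic fitness is a toy stand-in for the channel-gain residual.

```python
import random

def jaya(fitness, bounds, pop_size=20, iters=200, seed=0):
    """Minimal Jaya minimizer: x' = x + r1*(best - |x|) - r2*(worst - |x|),
    with greedy acceptance and clipping to the search bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(iters):
        scores = [fitness(x) for x in pop]
        best = pop[scores.index(min(scores))]
        worst = pop[scores.index(max(scores))]
        new_pop = []
        for x, s in zip(pop, scores):
            cand = [min(max(xi + rng.random() * (bi - abs(xi))
                               - rng.random() * (wi - abs(xi)), lo), hi)
                    for xi, bi, wi, (lo, hi) in zip(x, best, worst, bounds)]
            new_pop.append(cand if fitness(cand) < s else x)  # greedy keep
        pop = new_pop
    return min(pop, key=fitness)

# Toy stand-in for the channel-gain fitness: squared error to a
# known 3-D point (hypothetical receiver position).
target = (1.0, 2.0, 1.5)
fit = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
est = jaya(fit, [(0, 5)] * 3)
```

In the paper's setting, `fit` would instead sum the squared differences between estimated and test-point channel gains over the PDs.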
High classification throughput and dynamic rule-set updates are two core requirements for packet classification algorithms. Hybrid packet classification algorithms that combine hash tables and decision trees typically replace tree nodes with hash tables to compensate for the inherent update-performance shortcomings of decision trees. However, replacing tree nodes with hash tables introduces additional hash mappings into packet classification and disrupts the single query path of the original tree structure, ultimately degrading classification performance. Consequently, a hybrid packet classification algorithm based on the interval tree, named CutIntervalTree (CITree), is proposed to achieve high classification performance while supporting fast rule updates. Instead of introducing hash tables into the tree structure, the algorithm adds a preprocessing unit that partitions the rule set into independent and non-independent rules and stores them in a two-level tree structure and a hash table, respectively. This categorized storage fully exploits the high-speed classification of tree structures and the fast updates of hash tables. In addition, CITree stores independent rules in the root, intermediate, and leaf nodes of the interval tree, so matching can complete at any node without traversing to the leaves, realizing effective pruning. Experimental results show that the proposed algorithm improves classification throughput by 72.2% and rule-update efficiency by 63% compared with the current state-of-the-art algorithm.
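The rule-partitioning idea can be illustrated on one-dimensional range rules (a simplifying assumption; real classifier rules span multiple header fields). Here a rule counts as "independent" if its range overlaps no other rule.

```python
def partition_rules(rules):
    """Split 1-D range rules (lo, hi) into independent rules, which
    overlap no other rule and can live in the tree structure, and the
    rest, which go to the hash-backed store in a CITree-style design."""
    def overlap(r, s):
        return r[0] <= s[1] and s[0] <= r[1]
    independent, rest = [], []
    for i, r in enumerate(rules):
        if any(i != j and overlap(r, s) for j, s in enumerate(rules)):
            rest.append(r)
        else:
            independent.append(r)
    return independent, rest

rules = [(0, 10), (5, 15), (20, 30), (40, 50)]
ind, rest = partition_rules(rules)
```

The paper's actual independence criterion over multi-field rules is richer; this sketch only shows why the split is cheap to compute and why independent rules can be matched without ambiguity.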
As a multicarrier modulation technique proposed for high-speed mobile scenarios, AFDM can completely separate the paths of a doubly selective channel, achieving full time and frequency diversity gains, and is a strong candidate waveform for the physical layer of future mobile communications. However, research on symbol synchronization for AFDM systems remains very limited, and traditional synchronization algorithms struggle to perform well in complex doubly selective channels. To address this problem, this paper proposes a coarse-fine two-stage synchronization algorithm for doubly selective channels. Fine synchronization performed in the discrete affine Fourier transform (DAFT) domain corrects the errors of coarse synchronization, yielding high synchronization accuracy in doubly selective channels. On this basis, the repeated multi-segment pilot structure required for coarse synchronization is reused to jointly design fine synchronization with channel delay-Doppler shift estimation, saving pilot overhead. Finally, the complex channel gains are obtained using the LS estimation algorithm, realizing symbol synchronization and channel estimation for the AFDM system. Simulation results show that the algorithm achieves high synchronization accuracy and accurate channel estimation in doubly selective channels while significantly reducing the pilot energy required for channel estimation, demonstrating good adaptability and application prospects in such channel environments.
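A toy sketch of the coarse stage under a repeated-pilot structure (the DAFT-domain fine stage and delay-Doppler estimation are beyond a short example): the normalized correlation between two adjacent length-L windows peaks where the two received pilot copies align. The signal model and noise level below are illustrative assumptions.

```python
import cmath
import random

def coarse_sync(r, L):
    """Coarse timing from a pilot transmitted twice back-to-back:
    slide a window and score the normalized correlation between
    r[n..n+L-1] and r[n+L..n+2L-1]; the metric peaks at the offset
    where the two received copies line up."""
    best_n, best_m = 0, -1.0
    for n in range(len(r) - 2 * L + 1):
        num = sum(r[n + k].conjugate() * r[n + L + k] for k in range(L))
        e1 = sum(abs(r[n + k]) ** 2 for k in range(L))
        e2 = sum(abs(r[n + L + k]) ** 2 for k in range(L))
        m = abs(num) ** 2 / (e1 * e2) if e1 and e2 else 0.0
        if m > best_m:
            best_n, best_m = n, m
    return best_n

# Toy received signal: noise, two identical random-phase pilot
# segments of length L, then more noise.
rng = random.Random(1)
L, start = 16, 40
seg = [cmath.exp(2j * cmath.pi * rng.random()) for _ in range(L)]
noise = lambda: complex(rng.gauss(0, 0.05), rng.gauss(0, 0.05))
r = ([noise() for _ in range(start)]
     + [s + noise() for s in seg + seg]
     + [noise() for _ in range(40)])
est = coarse_sync(r, L)
```

Normalizing by both window energies keeps the metric sharply peaked rather than forming a plateau across the repeated segment.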
Classical simulation of quantum algorithms plays a crucial role in evaluating algorithm performance and verifying theoretical correctness. For high-order sparse matrices, the corresponding Hamiltonians often exhibit complex structures, leading to excessively high complexity in quantum solving, which severely constrains simulation efficiency and accuracy. To address these challenges, modular decomposition techniques and function construction methods are proposed to approximate Hamiltonian evolution, thereby establishing a general circuit design scheme for implementing the Harrow-Hassidim-Lloyd (HHL) algorithm on classical computers. We implement multi-scale quantum circuits with 13/14 qubits (basic scale) and 20/21 qubits (extended scale) in the Qiskit quantum computing framework, and verify the applicability of the designed circuits by testing multiple sets of 8×8 Hermitian matrices and column vectors. Finally, we analyze the fidelity, error, and time and space resources for various linear systems under different conditions. Experimental results demonstrate that as the qubit scale expands, quantum circuits incorporating the two techniques show synchronous optimization, with enhanced fidelity and reduced error when solving linear systems. Compared with other methods, both techniques demonstrate superior large-scale circuit processing capability, providing a scalable technical route for using quantum algorithms to solve high-dimensional linear systems.
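The fidelity metric used to score such simulations against the classical solution can be written down directly; the vectors below are illustrative, not from the paper's test set.

```python
def fidelity(x, y):
    """State fidelity |<x|y>|^2 between two real vectors after
    normalization -- the standard score for comparing an HHL output
    register against the classically solved solution vector."""
    nx = sum(a * a for a in x) ** 0.5
    ny = sum(b * b for b in y) ** 0.5
    return (sum(a * b for a, b in zip(x, y)) / (nx * ny)) ** 2

# Hypothetical example: exact solution of a small linear system vs a
# slightly perturbed "measured" amplitude vector.
exact = [1.0, 2.0]
measured = [1.02, 1.97]
f = fidelity(exact, measured)
```

Because HHL encodes the solution only up to normalization, fidelity compares directions, so proportional vectors score 1.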
Aiming at the poor performance of iris segmentation on low-quality images, this paper proposes an iris segmentation algorithm based on a collaborative attention mechanism. Built on the U-Net model, the algorithm introduces a dual attention module combining a regional attention mechanism and a quality-aware attention mechanism, collaboratively improving iris segmentation accuracy along the two dimensions of position perception and image-quality perception. Specifically, the regional attention mechanism predicts the area where the iris ring is located and constrains the target spatial region during feature extraction, effectively reducing background noise interference. The quality-aware attention mechanism dynamically adjusts the focus on key features in the convolutional attention module based on image quality assessment results, significantly enhancing the expression of key features in low-quality images. Experimental results on public and self-built datasets show that the algorithm outperforms mainstream segmentation models such as U-Net and IrisParseNet on the two core metrics of intersection over union (IoU) and accuracy. The improvement is especially significant for low-quality images affected by low illumination and motion blur. These improvements provide reliable technical support for the practical application of iris recognition systems in complex environments.
Social recommendation improves personalized recommender systems by exploiting users' social connections. However, most existing methods struggle to fully capture the complex relationships between users and items, while neglecting the social inconsistency caused by irrelevant or even erroneous social ties, which degrades user embeddings and recommendation accuracy. This paper proposes a knowledge-aware neighbor filtering mechanism for social recommendation (KFRec) that integrates knowledge graphs with graph neural networks to resolve these issues. First, knowledge graph embedding techniques represent users, items, and ratings as vectors, capturing the latent relational patterns among them. These vectors are then fed into a graph neural network to optimize its node representations. To improve the model's ability to recognize consistency, query vectors are dynamically constructed from the user-item pairs under evaluation, and consistency scores between the query vectors and neighbor nodes are computed using the knowledge graph. By sampling and aggregating the more consistent neighbor nodes, the graph neural network's ability to filter inconsistent neighbors and refine node representations is enhanced. Extensive experiments on three public datasets demonstrate the superiority of KFRec over existing mainstream methods.
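The consistency scoring and neighbor filtering step might be sketched as follows, with cosine similarity standing in for the paper's knowledge-graph-based consistency score; the embedding vectors are illustrative assumptions.

```python
def consistency(query, neighbor):
    """Cosine similarity as a consistency score between a query
    vector (built from the user-item pair under evaluation) and one
    neighbor's embedding."""
    dot = sum(a * b for a, b in zip(query, neighbor))
    nq = sum(a * a for a in query) ** 0.5
    nn = sum(b * b for b in neighbor) ** 0.5
    return dot / (nq * nn)

def filter_neighbors(query, neighbors, top_k=2):
    """Keep only the top_k most consistent neighbors before GNN
    aggregation, dropping inconsistent social ties."""
    return sorted(neighbors, key=lambda v: consistency(query, v),
                  reverse=True)[:top_k]

q = [1.0, 0.0, 1.0]
nbrs = [[0.9, 0.1, 1.1],    # consistent tie
        [-1.0, 0.2, -0.8],  # contradictory tie (negative score)
        [1.0, 0.0, 0.9]]    # most consistent tie
kept = filter_neighbors(q, nbrs)
```

Aggregating only `kept` rather than all of `nbrs` is what keeps a contradictory social tie from polluting the user embedding.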
Direct counting frequency measurement offers a wide measurement range but limited resolution, while indirect phase measurement offers high resolution but suffers from measurement dead zones and a narrow range. This paper proposes a digital linear phase comparison method with a variable least-common-multiple period for frequency and frequency stability measurement. The method employs an analog-to-digital converter as a phase detector to acquire phase information; the digital measurement approach avoids the dead zone of traditional phase comparison methods. By confining phase detection to the linear range, the measurement ambiguity zone is circumvented and the resolution is improved. By analyzing the variation of the phase difference between signals of arbitrary frequencies, taking the nominal least-common-multiple period as the sampling interval, and combining rough frequency measurement with truncation processing, the measurement range is expanded and the noise introduced by normalization is avoided. Processing the sampled phase information within the linear region completes the direct phase comparison between signals of arbitrary frequencies and achieves high-resolution measurement of frequency and frequency stability. Experimental results show that the system noise floor is better than 1.86×10⁻¹⁵ at an averaging time of 1000 s, and that with a 1-second measurement gate the frequency measurement resolution reaches 20 μHz. For frequency source signals in the range of 1 MHz to 100 MHz, the accuracy of the high-resolution frequency and frequency stability measurements remains stable.
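The nominal least-common-multiple period of two signals, the shortest interval containing a whole number of periods of both, can be computed exactly with rational arithmetic. A minimal sketch, assuming integer-valued frequencies in Hz (the paper's rough-measurement and truncation steps are not shown):

```python
from fractions import Fraction
from math import gcd

def lcm_period(f1_hz, f2_hz):
    """Nominal least-common-multiple period of two signals with
    integer frequencies, computed exactly as the LCM of their
    periods: lcm(a/b, c/d) = lcm(a, c) / gcd(b, d)."""
    t1, t2 = Fraction(1, f1_hz), Fraction(1, f2_hz)
    num = t1.numerator * t2.numerator // gcd(t1.numerator, t2.numerator)
    den = gcd(t1.denominator, t2.denominator)
    return Fraction(num, den)

# Hypothetical pair: 10 MHz reference vs a 10.23 MHz source.
T = lcm_period(10_000_000, 10_230_000)
```

Sampling at this interval means both signals return to the same relative phase, which is what allows direct phase comparison between signals of arbitrary (rationally related) frequencies.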
Multifunction radar (MFR) realizes multi-task coordination through waveform agility and adaptive beam scheduling, which poses many challenges for radar working pattern recognition. Existing recognition methods rely on local time-domain characteristics of the pulse sequence and struggle to analyze the generation mechanisms of different working modes; under complex conditions such as pulse loss and similar intra-pulse parameters, their recognition performance drops sharply. Considering the influence of the MFR beam scanning process on the amplitude information of pulse group sequences, a working pattern recognition method based on the Spatial-Temporal Graph Convolutional Network (ST-GCN) is proposed. The network first quantifies the similarity of the radiation characteristics of adjacent wave-position signals through a dynamic regularization module and constructs a physically interpretable spatial adjacency matrix. Then, the one-dimensional pulse group sequence is mapped to a two-dimensional graph structure, and node features such as pulse frequency and signal amplitude are fused into a joint spatial-temporal representation. Finally, layered graph convolution kernels are designed, and deep spatial-temporal features are extracted through a multi-layer message passing mechanism to complete the working pattern recognition. Comparative experiments show that the average recognition rate of the proposed method still reaches 93.38% under non-ideal conditions such as pulse loss, with better generalization and robustness.
The objective of digital mural restoration is to employ information technology to reconstruct missing or damaged portions of murals, reinstating their visual coherence and authentic artistic representation. Existing deep learning methods for mural restoration often lack sufficient cross-modal semantic constraints from text, which can lead to semantic confusion and loss of detail in the restoration results. To address this issue, we propose a text-guided multi-modal mural diffusion restoration method. First, a text encoding module based on multi-head self-attention projects mural textual descriptions into the feature space, and a cross-modal interaction mechanism fuses textual and visual features to enhance semantic consistency between modalities. Then, we build a diffusion-based mural restoration module: the forward diffusion process adds noise to generate Gaussian-distributed mural features, while the reverse network reconstructs the missing regions. Next, a mask refinement control module uses features from the complementary mural mask to guide the reverse decoding process and improve texture and detail generation, enabling accurate restoration of damaged murals. Finally, experiments on the Dunhuang mural dataset show that the proposed method outperforms the comparison methods.
The density peak clustering algorithm clusters variable-density datasets poorly, and a "domino" effect can occur during sample assignment. To address these problems, a density peak clustering algorithm combining cluster growth and a boundary assignment strategy is proposed. The algorithm uses local k-nearest-neighbor information to calculate sample density and relative distance, from which the sample decision values are obtained. Based on the distance, density, and neighbor relationships between samples, an attraction degree and a growth radius are defined; combined with the decision values, cluster centers are selected in turn, and a cluster growth strategy is proposed. Starting from each cluster center, this strategy grows the current cluster using the attraction degree and growth radius to obtain an initial clustering result. On this basis, an adjacency degree is defined using the nearest-neighbor and distance information between the assigned clusters and the unassigned samples, and a boundary assignment strategy is proposed: each unassigned sample is assigned to the most appropriate cluster according to the adjacency degree, and the assigned and unassigned sample sets are updated continuously until all samples are assigned, yielding the final clustering result. Compared with 7 algorithms on 16 synthetic datasets and 10 UCI datasets, the proposed algorithm is superior in adjusted Rand index, normalized mutual information, and adjusted mutual information on most datasets. Statistical tests further show that the differences between the proposed algorithm and the comparison algorithms are significant, confirming its better clustering performance.
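The decision-value computation (density times relative distance) behind center selection can be sketched as follows. The k-NN density here uses an exponential of the mean neighbor distance, one common choice rather than necessarily the paper's exact formula, and the toy dataset is an illustrative assumption.

```python
from math import dist, exp

def decision_values(points, k=2):
    """Density-peak decision values: k-NN local density rho, relative
    distance delta to the nearest higher-density sample (or the
    maximum distance for the global density peak), and gamma =
    rho * delta; cluster centers are the samples with largest gamma."""
    n = len(points)
    d = [[dist(p, q) for q in points] for p in points]
    rho = []
    for i in range(n):
        knn = sorted(d[i][j] for j in range(n) if j != i)[:k]
        rho.append(exp(-sum(knn) / k))
    gamma = []
    for i in range(n):
        higher = [d[i][j] for j in range(n) if rho[j] > rho[i]]
        delta = min(higher) if higher else max(d[i])
        gamma.append(rho[i] * delta)
    return gamma

# Two tight blobs and one outlier: one sample per blob should get a
# dominant decision value; the outlier is dense-poor and scores low.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (2.5, 9)]
gamma = decision_values(pts)
```

The proposed cluster growth and boundary assignment stages then take over from the selected centers; they are not reproduced here.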
In array signal processing, existing improved arrays have significant performance limitations. The improved coprime array has a sparse structure, but its ability to fill holes is weak, limiting the number of identifiable incoming signals; the improved nested array can fill most holes and thus identify more incoming signals, but its densely distributed elements cause strong coupling effects that severely restrict angle estimation accuracy. To resolve this contradiction, this paper proposes an Optimal Weight function Sparse Array (OWSA) based on a newly improved array configuration. By optimizing the element layout and weight allocation, the array improves element sparsity while retaining a long uniform degree of freedom, effectively alleviating electromagnetic coupling between elements and jointly improving the ability to identify multiple incoming wave angles and the coupling suppression performance. Experimental results show that OWSA successfully balances uniform degrees of freedom against coupling effects, which traditional configurations struggle to reconcile. In complex scenarios such as low signal-to-noise ratio, few snapshots, and high coupling, its direction-of-arrival estimation accuracy is significantly better than that of the improved coprime and nested arrays, verifying the feasibility and superiority of the new array for high-precision angle estimation in complex electromagnetic environments.
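The uniform-degrees-of-freedom and weight-function analysis used to compare such configurations can be sketched generically. Since the OWSA layout itself is the paper's contribution, a standard two-level nested array serves as the example here.

```python
from collections import Counter

def coarray_stats(sensors):
    """Difference coarray analysis of a sparse linear array: the lag
    set {n_i - n_j}, the uniform degrees of freedom (length of the
    central hole-free segment), and the weight function w(m) counting
    sensor pairs per lag -- large w(1), w(2), w(3) indicate strong
    mutual coupling between closely spaced elements."""
    w = Counter(a - b for a in sensors for b in sensors)
    u = 0
    while u + 1 in w:
        u += 1
    return set(w), 2 * u + 1, w

# Two-level nested array (inner [1,2,3], outer [4,8,12]): its
# difference coarray is hole-free over [-11, 11].
nested = [1, 2, 3, 4, 8, 12]
lags, udof, w = coarray_stats(nested)
```

A design like OWSA aims to keep `udof` large while shrinking the small-lag weights `w[1]`, `w[2]`, `w[3]` that drive mutual coupling.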
Mini programs, exemplifying the "app-in-app" paradigm, have become deeply integrated into people's work and daily lives and access substantial amounts of private user data. To prevent privacy leaks, mini program platforms monitor and regulate regular communication channels. However, mini programs can use covert communication to evade this detection. Addressing the security threat that covert communication poses to user privacy, this paper analyzes the privacy leakage risk of mini program covert communication. After summarizing the covert communication model and communication conditions of mini programs, we design covert communication methods for both mini-program-to-mini-program and mini-program-to-server communication based on mini program APIs and components, adopting invisible-character source coding and forged pages, respectively, to improve covertness. Experiments verify that these covert communication methods can transmit secret information, and two attack scenarios are designed to analyze the resulting privacy leakage risk. Finally, corresponding mitigation measures are discussed.
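Invisible-character source coding can be illustrated with zero-width Unicode characters; this is a generic sketch of the idea, not the paper's exact coding scheme, and the cover/secret strings are illustrative.

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def hide(cover, secret):
    """Append the secret as invisible characters: each bit of its
    UTF-8 bytes becomes a zero-width space (0) or non-joiner (1),
    so the stego text renders identically to the cover text."""
    bits = "".join(f"{b:08b}" for b in secret.encode("utf-8"))
    return cover + "".join(ZW1 if bit == "1" else ZW0 for bit in bits)

def reveal(stego):
    """Recover the secret by filtering out the zero-width characters
    and reassembling their bits into bytes."""
    bits = "".join("1" if c == ZW1 else "0"
                   for c in stego if c in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stego = hide("Order confirmed.", "key=42")
```

Because the payload characters have zero display width, the stego message looks unchanged to a user while carrying 8 invisible characters per secret byte, which is also why platform-side filtering of such characters is an effective mitigation.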
One of the main challenges in image steganalysis is maintaining high detection accuracy while simplifying the model structure, including reducing the number of trainable parameters and accelerating both training and inference. To address this issue, this paper proposes a steganalysis scheme that combines element-wise multiplication with deep orthogonal fusion. First, a multi-scale attention module is designed to enhance the noise residual features extracted by the SRM filters during preprocessing. Then, a feature analysis module incorporating separable convolutions and element-wise multiplication performs multi-scale modeling and learning of the enhanced noise features. Finally, an orthogonal feature fusion module integrates local and global noise features in an orthogonal manner, compensating for the loss of fine-grained detail caused by global average pooling. Experiments are conducted on two public datasets, BOSSBase and BOWS2, using two typical adaptive steganographic algorithms, S-UNIWARD and WOW, at various embedding rates. The results show that the proposed method achieves detection accuracies approximately 2.4% and 1.2% higher on average than commonly used methods for the two algorithms, while significantly reducing the number of model parameters and improving overall efficiency. In addition, ablation studies validate the effectiveness of each proposed module.
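The noise-residual extraction that the SRM filters perform can be illustrated with a single high-pass kernel; the real SRM bank uses 30 hand-crafted kernels, so this one-kernel version is a deliberate simplification.

```python
def residual(img):
    """Second-order horizontal high-pass residual, a simple stand-in
    for one SRM filter: 2*x[c] - x[c-1] - x[c+1] cancels smooth image
    content so that noise-like stego perturbations stand out."""
    return [[2 * row[c] - row[c - 1] - row[c + 1]
             for c in range(1, len(row) - 1)]
            for row in img]

flat = [[7, 7, 7, 7]] * 2   # smooth content -> residual is all zeros
spike = [[7, 7, 8, 7]]      # a single +1 "embedding" perturbation
```

On `flat` the residual vanishes, while the lone perturbed pixel in `spike` produces a strong signed response; the proposed attention and fusion modules then operate on such residual maps.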
The Vehicular Ad Hoc Network (VANET) is a specialized form of mobile ad hoc network that uses vehicles and infrastructure as nodes. It exchanges data and shares information among these nodes via wireless communication, significantly optimizing traffic efficiency, ensuring driving safety, and improving user experience. However, transmitting data in plaintext exposes it to security attacks such as data forgery and theft. Existing VANET communication mechanisms prioritize data authenticity over confidentiality: while they ensure the authenticity of transmitted data, they may not sufficiently protect its confidentiality. Furthermore, these mechanisms struggle to promptly identify and revoke access for malicious vehicles that can compromise network security. To tackle these problems, this paper distills the core techniques of the SM2 algorithm, designs an efficient SM2-based signcryption scheme, and proves its security under the random oracle model. The signcryption scheme is then implemented in the VANET system and integrated with blockchain smart contract technology to manage vehicle certificates. The resulting VANET protocol guarantees both data confidentiality and authenticity in a single logical step, while also achieving rapid tracking and revocation of malicious vehicles; the introduction of smart contracts further improves the transparency and credibility of the system. Experimental analysis demonstrates that, compared with existing work, the proposed protocol reduces communication overhead by about 12% without significantly increasing computational overhead.
Elliptic Curve Cryptography (ECC) is a class of public-key cryptographic algorithms with exponential attack difficulty, widely used in fields such as the encryption of second-generation ID cards. Shor's algorithm theoretically poses a fatal threat to public-key cryptography, but to date there have been no reports in the open literature of quantum algorithms successfully attacking ECC. To address this gap, this paper proposes a quantum annealing-based algorithm for attacking the Elliptic Curve Discrete Logarithm Problem (ECDLP) over finite fields. The approach first optimizes the coefficients in the Ising model conversion during quantum annealing, reducing the weights and coupling strengths (h_i and J_{i,j}) of the relevant qubits by over 89.02%. Solving the Ising model optimized with the Semaev summation polynomial by quantum annealing greatly reduces the energy gap during annealing, thereby revealing the relationships between points on the elliptic curve. Next, a sufficient number of Semaev polynomials are solved, and the resulting relationships are transformed into a system of linear equations. For this system, a new quantum annealing-based algorithm for solving linear equations is proposed, enabling the solution of underdetermined and non-square linear systems. Ultimately, this work solves the ECDLP over a finite field of up to 10 bits on the D-Wave Advantage, a finite field size 289% larger than that of the previous largest solution. Experimental results show that the proposed method effectively reduces the solution difficulty of D-Wave quantum annealing and constitutes a new quantum algorithm capable of attacking the ECDLP.
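The final step, solving a (possibly underdetermined) linear system by energy minimization, can be sketched with exhaustive search standing in for the annealer; the 1×2 system below is an illustrative toy, and the Semaev-polynomial construction and coefficient optimization are not shown.

```python
from itertools import product

def solve_linear_qubo(A, b, bits=2):
    """Toy stand-in for the annealing step: encode the integer linear
    system A x = b as the energy E(x) = ||A x - b||^2, expand each
    unknown x_j into `bits` binary variables, and take the ground
    state by exhaustive search (a real run would hand E, rewritten in
    Ising form, to the annealer)."""
    n = len(A[0])
    best, best_e = None, None
    for assign in product((0, 1), repeat=n * bits):
        # Reassemble each unknown from its binary expansion (LSB first).
        x = [sum(assign[j * bits + k] << k for k in range(bits))
             for j in range(n)]
        e = sum((sum(a * xi for a, xi in zip(row, x)) - bi) ** 2
                for row, bi in zip(A, b))
        if best_e is None or e < best_e:
            best, best_e = x, e
    return best, best_e

# Underdetermined 1x2 system x0 + 2*x1 = 5 with 2-bit unknowns:
# any assignment satisfying the equation has ground-state energy 0.
x, e = solve_linear_qubo([[1, 2]], [5], bits=2)
```

Because every satisfying assignment sits at energy zero, underdetermined and non-square systems pose no structural problem for this encoding; the practical difficulty the paper targets is keeping the coefficient magnitudes, and hence the annealer's energy landscape, manageable.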