2012 Vol. 41, No. 2
2012, 41(2): 163-175.
doi: 10.3969/j.issn.1001-0548.2012.02.001
Abstract:
In this article, the existing evaluation metrics for recommender systems are reviewed, and recent progress in this field is summarized from four aspects: accuracy, diversity, novelty, and coverage. The merits, weaknesses, and applicable conditions of different evaluation metrics are analyzed. The focus is placed on the importance of rank and on some representative rank-sensitive metrics. User-centric recommender systems are discussed, and some important open problems are outlined as possible future directions.
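As a concrete illustration of a rank-sensitive accuracy metric of the kind the survey emphasizes, the sketch below computes normalized discounted cumulative gain (NDCG); the relevance values are hypothetical:

```python
import math

def ndcg_at_k(ranked_relevances, k):
    """Normalized discounted cumulative gain: a rank-sensitive accuracy metric.
    Items ranked higher contribute more, via a logarithmic position discount."""
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# A recommendation list where the most relevant item is ranked third:
print(round(ndcg_at_k([0, 1, 3, 0, 2], k=5), 3))  # 0.61
```

Unlike plain precision, swapping the first two items here changes the score, which is exactly the rank sensitivity the reviewed metrics aim for.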
2012, 41(2): 176-184.
doi: 10.3969/j.issn.1001-0548.2012.02.002
Abstract:
The problem of efficient path computation arises in many domains of practical importance. Classical algorithms fail to scale to large networks because of their heavy computational complexity. This paper reviews some of the latest representative results, classified according to the underlying speedup techniques, including basic techniques such as priority queues, goal-directed techniques, and hierarchical approaches. Moreover, the paper introduces the authors' most recent achievements in network hierarchy construction and hierarchical algorithm design. Finally, some future directions are discussed.
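A minimal sketch of the baseline such surveys start from, Dijkstra's algorithm with a binary-heap priority queue (the toy network below is hypothetical):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with a binary-heap priority queue,
    the basic speedup technique over naive linear scans."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical toy network: adjacency list of (neighbor, edge weight).
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 6)], "c": [("d", 3)]}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

Goal-directed and hierarchical techniques build on this loop by pruning or reordering the nodes the heap explores.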
2012, 41(2): 185-191.
doi: 10.3969/j.issn.1001-0548.2012.02.003
Abstract:
The most popular modularity optimization may fail to identify communities smaller than a certain scale. A natural density of networks is proposed for describing the degree of interconnectedness of modules. A density modularity function is constructed to evaluate community structure partitioning based on this natural density. Three case studies prove that the density modularity function can overcome the resolution limit of Newman-Girvan (NG) modularity. The density modularity has been tested on both artificial networks and classical real-world networks. Computational results demonstrate its effectiveness.
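For reference, the NG modularity whose resolution limit the paper addresses can be computed as follows; the two-triangle graph is a hypothetical example, not from the paper:

```python
def ng_modularity(edges, communities):
    """Newman-Girvan modularity Q = sum_c (e_c/m - (d_c/(2m))^2), where e_c is
    the number of intra-community edges and d_c the total degree in community c."""
    m = len(edges)
    comm_of = {n: c for c, nodes in enumerate(communities) for n in nodes}
    intra = [0] * len(communities)
    degree = [0] * len(communities)
    for u, v in edges:
        degree[comm_of[u]] += 1
        degree[comm_of[v]] += 1
        if comm_of[u] == comm_of[v]:
            intra[comm_of[u]] += 1
    return sum(e / m - (d / (2 * m)) ** 2 for e, d in zip(intra, degree))

# Two triangles joined by a single bridge edge:
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(ng_modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 3))  # 0.357
```

The resolution limit arises because the (d_c/(2m))^2 penalty shrinks as the whole network grows, so small but well-connected modules stop paying off as separate communities.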
2012, 41(2): 192-197.
doi: 10.3969/j.issn.1001-0548.2012.02.004
Abstract:
By using the approximate kernel DFT, an improved algorithm for the detection and frequency estimation of multi-component sinusoidal signals is presented. A frequency-correction term is constructed from the real or imaginary parts of the approximate kernel DFT coefficients, and a robust unsupervised threshold for detecting sinusoidal signals is formed from the ratio of the maximum value to the median value of the real or imaginary parts. The algorithm avoids the complex operations of traditional correction and detection algorithms. In addition, a hardware implementation scheme, the approximate kernel FFT, is introduced, and the detection of chirp signals using the approximate kernel FFT is studied as well. Examples are provided to illustrate the effectiveness of the presented algorithm.
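A hedged sketch of the detection idea, using an ordinary DFT in place of the paper's approximate-kernel DFT; the threshold value and test signals are illustrative assumptions:

```python
import math
from statistics import median

def detect_sinusoid(x, ratio_threshold=10.0):
    """Declare a sinusoid present when the ratio of the maximum to the median
    of |Re(X[k])| over the positive-frequency bins exceeds a fixed threshold.
    An ordinary DFT stands in here for the paper's approximate-kernel DFT."""
    n = len(x)
    re = [abs(sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n)))
          for k in range(1, n // 2)]
    ratio = max(re) / max(median(re), 1e-12)
    return ratio > ratio_threshold, ratio

n = 64
tone = [math.cos(2 * math.pi * 8 * t / n) for t in range(n)]
print(detect_sinusoid(tone)[0])  # True: a tone gives a large max/median ratio
```

The median makes the threshold unsupervised: a single strong spectral peak barely moves the median of the bins, so the ratio separates tones from broadband content without a noise-level estimate.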
2012, 41(2): 198-202,237.
doi: 10.3969/j.issn.1001-0548.2012.02.005
Abstract:
The linearity of a linear frequency modulated continuous wave synthetic aperture radar (LFMCW SAR) is an important index for characterizing the linear frequency modulated (LFM) signal and has a great effect on SAR imaging quality. Addressing the nonlinearity of LFMCW and considering the characteristics of the LFMCW SAR system, the nonlinear frequency error is analyzed, and the sideband effects of frequency correction with different reference signals are discussed. An equivalent approach that transforms the nonlinear frequency error into additive noise by means of phase errors is proposed, and the effect of the equivalent noise on LFMCW SAR range resolution is analyzed through simulation. The results demonstrate that lower linearity has a larger effect on the beat frequency and the signal-to-noise ratio (SNR). The proposed method provides a new approach to eliminating the influence of nonlinearity through signal processing.
2012, 41(2): 203-207.
doi: 10.3969/j.issn.1001-0548.2012.02.006
Abstract:
This paper studies weak signal detection using the Duffing oscillator. Deciding quickly and accurately whether a weak signal exists in a strong noise environment is of both theoretical and engineering value. Considering that most existing Duffing-oscillator-based weak signal detection methods rely on qualitative analysis, this paper uses the Hamiltonian to construct a statistic for quantitative decisions. The Hamiltonian can depict the system dynamics in real time, so it can be used for decisions in short-time situations. This paper utilizes the mean pseudo-Hamiltonian to depict the changing states of the Duffing system in real time. The scheme can detect weak signals at low signal-to-noise ratios quickly. Simulation verifies the effectiveness of the proposed scheme.
2012, 41(2): 208-211,226.
doi: 10.3969/j.issn.1001-0548.2012.02.007
Abstract:
Decimation filters of better performance are demanded in multi-rate processing when wideband signals with high sample rates are processed by a digital IF receiver. The frequency response of the classic CIC decimation filter does not meet the requirement when processing wideband signals: it has to increase pass-band attenuation to achieve the necessary stop-band attenuation, resulting in poor alias rejection. To solve this problem, the paper proposes an improved method that improves alias rejection by applying the sharpening technique to a two-stage comb filter, and applies compensation filters at the low sample rate, with low computational cost, to improve the pass-band characteristics. Simulation results indicate that the pass-band attenuation of the proposed filter is above -0.002 dB, the stop-band attenuation is below -57 dB, and the anti-aliasing performance is better.
2012, 41(2): 212-216.
doi: 10.3969/j.issn.1001-0548.2012.02.008
Abstract:
Conventional cross-correlation-based time synchronization methods have sharp correlation peaks, which make it easy to find the exact time delay point, but their performance is vulnerable to the frequency offset between the transmitter and the receiver. By taking constant amplitude zero autocorrelation (CAZAC) sequences as the training sequences, a cross-correlation-based time and frequency synchronization method is proposed that can work with a large frequency offset. Compared with conventional methods, no extra computational complexity is introduced in the process of searching for the correlation peak. The large frequency offset is also estimated by sliding cross-correlation, and the timing estimation is independent of the frequency offset. Simulations show that the method is applicable to both AWGN and fading channels.
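Zadoff-Chu sequences are a standard CAZAC family; this sketch verifies the constant-amplitude and zero-autocorrelation properties such training sequences rely on (the sequence length and root are arbitrary choices, not taken from the paper):

```python
import cmath

def zadoff_chu(n, root=1):
    """Zadoff-Chu sequence (odd length n): a standard CAZAC family. Every
    sample has unit amplitude, and the periodic autocorrelation vanishes
    at all nonzero lags."""
    return [cmath.exp(-1j * cmath.pi * root * k * (k + 1) / n) for k in range(n)]

def periodic_autocorr(s, lag):
    n = len(s)
    return sum(s[k] * s[(k + lag) % n].conjugate() for k in range(n))

s = zadoff_chu(63)
print(all(abs(abs(v) - 1) < 1e-12 for v in s))                          # constant amplitude
print(all(abs(periodic_autocorr(s, l)) < 1e-9 for l in range(1, 63)))   # zero autocorrelation
```

The single sharp autocorrelation peak at lag 0 is what keeps the timing estimate unambiguous even while a frequency offset rotates the received samples.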
2012, 41(2): 217-221.
doi: 10.3969/j.issn.1001-0548.2012.02.009
Abstract:
With the popularization of 3G networks, a mobile network automatic call testing system based on EVDO wireless modules is designed for Telecom. The hardware is designed so that soft card arrays are combined with hard card arrays, and the EVDO wireless modules execute each testing task by connecting to UIM cards in the soft or hard card arrays. The test processes and test methods of the tasks are described in the software components. A complete test platform is built, and a user-friendly control interface is implemented to achieve automatic control. The experimental results show that the system, operated by a single person, can complete each testing task quickly and accurately. The designed system meets the design requirements and has been used in related fields.
2012, 41(2): 222-226.
doi: 10.3969/j.issn.1001-0548.2012.02.010
Abstract:
The multiple-aperture sub-band synthesis technique can be realized for multiple-input multiple-output (MIMO) SAR at the current state of the art. Two kinds of frequency sub-band synthesis techniques are studied. A mathematical model for MIMO SAR is proposed and its feasibility is analyzed. The imaging process for MIMO SAR based on the CS algorithm is presented. Computer simulations show the effectiveness of the method.
2012, 41(2): 227-231.
doi: 10.3969/j.issn.1001-0548.2012.02.011
Abstract:
An extrinsic optical fiber Fabry-Pérot sensor is fabricated by chemical etching of B-Ge co-doped optical fiber and electrical arc discharge. It is found that the etching rate of the fiber core is more than 30 times that of standard single-mode fiber when 40% hydrofluoric acid is used as the etching solution. Within the refractive index (RI) range of 1.350-1.400, the RI sensor has a sensitivity of ~30 dB/RIU and a resolution of ~3.33×10⁻⁵, with a linearity of >0.99. This low-cost method is suitable for mass production and has great potential for fabricating fiber-optic Fabry-Pérot sensors with long cavity lengths for use in spatial-frequency-division multiplexing.
2012, 41(2): 232-237.
doi: 10.3969/j.issn.1001-0548.2012.02.012
Abstract:
A service-oriented cross-layer survivability strategy based on the GMPLS control plane (SCSG) is proposed. Differentiated services are implemented by using a regeneration node configuration scheme based on quality of transmission (QoT). Network survivability is guaranteed by using a segment protection scheme based on regeneration nodes. Duplicated resource assignment and conflicts between inter-layer survivability strategies are avoided by using cross-layer route mapping based on the GMPLS control plane. The results show that the proposed survivability strategy not only provides differentiated services, but also improves network survivability and resource utilization.
2012, 41(2): 238-241,304.
doi: 10.3969/j.issn.1001-0548.2012.02.013
Abstract:
Fused silica samples with different etching times were obtained by HF acid etching. The variation and distribution of inclusions were analyzed by synchrotron X-ray fluorescence spectrometry. The relative concentrations of the main inclusions, such as Fe, Cu, and Ce, decrease with increasing etching time. AFM and Wyko images of the samples show that the surface roughness increases with increasing etching time. The R:1 test procedure was applied to the pre-etched samples. The laser-induced damage threshold (LIDT) increases within the first 30 min of etching and then rapidly decreases to the level of non-etched fused silica. The results indicate that the Cu and Ce concentrations play an important role at short etching times, whereas at long etching times the decreased LIDT is attributed to the increased surface roughness.
2012, 41(2): 242-246.
doi: 10.3969/j.issn.1001-0548.2012.02.014
Abstract:
Applying computer-aided design (CAD) techniques to particle-in-cell (PIC) simulation will greatly advance the practicability of the PIC method. Using object-oriented and modularized design methods, a three-dimensional CAD system for PIC simulation is realized, based on an analysis of the requirements for modeling complex vacuum electronic microwave sources, millimeter-wave sources, and terahertz sources. The CAD system can visually model various complex devices and generate the parameter description file for the kernel calculation. Using this system, a magnetically insulated transmission line oscillator (MILO) and an extended interaction oscillator (EIO) are modeled and simulated in cylindrical and Cartesian coordinates, respectively. The results show the practicability of the three-dimensional CAD system for PIC simulation.
2012, 41(2): 247-252.
doi: 10.3969/j.issn.1001-0548.2012.02.015
Abstract:
The terahertz (THz) spectrum has rich scientific connotations and unique features, leading to many important applications in physics, chemistry, electronic information, biomedicine, materials science, astronomy, atmosphere and environment monitoring, communication and radar, national security and anti-terrorism, etc. THz waveguides suited to different applications are urgently needed because of the strong water vapor absorption of THz waves in the open air. Each THz waveguide has its most suitable applications according to its unique advantages and limitations; a rational choice will greatly facilitate the application of THz in scientific research, manufacturing, and daily life. This paper presents an overview of the research development of some typical waveguides, such as THz metal cavity waveguides, metal wire waveguides, and dielectric waveguides.
2012, 41(2): 253-258.
doi: 10.3969/j.issn.1001-0548.2012.02.016
Abstract:
In this paper, the Hilbert-Huang transform (HHT) method is used to reduce vortex flowmeter signal noise in oscillatory flow. Empirical mode decomposition (EMD) scale filtering is used to remove noise from the vortex flowmeter signal in oscillatory flow, and its denoising effect is compared with that of wavelet threshold filtering on the same signals. Signal simulation tests show that both EMD scale filtering and wavelet threshold filtering achieve good results; however, the former is more convenient and completely adaptive.
2012, 41(2): 259-264.
doi: 10.3969/j.issn.1001-0548.2012.02.017
Abstract:
A new method based on the Visio drawing control component is proposed to solve hierarchical modeling problems for multi-signal models. An example is used to illustrate the modeling and analysis methods for multi-signal models. Then a new algorithm for generating the dependency matrix of faults and tests is proposed. Based on this method, graphical system testability modeling and analysis software is developed and applied to a radar transmitter. Theoretical analysis and experimental results show that this method is simple and efficient, and is valuable for system testability modeling and analysis.
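One plausible way to derive a fault-test dependency matrix is reachability over the signal-flow graph; the structure, fault sites, and test points below are entirely hypothetical and are not the paper's algorithm:

```python
def dependency_matrix(structure, faults, tests):
    """Sketch: in a multi-signal model, test t is marked as depending on fault f
    when f's location can reach t's location along the signal flow. 'structure'
    is a hypothetical directed signal-flow graph given as adjacency lists."""
    def reachable(src):
        seen, stack = set(), [src]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(structure.get(u, []))
        return seen
    return [[1 if tests[t] in reachable(faults[f]) else 0
             for t in range(len(tests))] for f in range(len(faults))]

# Hypothetical chain: module A feeds B, B feeds C, with tests placed at B and C.
flow = {"A": ["B"], "B": ["C"], "C": []}
print(dependency_matrix(flow, faults=["A", "B", "C"], tests=["B", "C"]))
# [[1, 1], [1, 1], [0, 1]]
```

Each row then tells which tests can observe a given fault, which is the raw material for testability measures such as fault detectability and isolability.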
2012, 41(2): 265-268.
doi: 10.3969/j.issn.1001-0548.2012.02.018
Abstract:
In recent years, the Internet of Things has attracted wide attention around the world. An indoor environmental monitoring system is designed under the Internet of Things framework. The system is built on a wireless sensor network and virtual instruments. The monitoring platform receives data from the wireless sensors and communicates with a server through a local area network. The server displays real-time monitoring results and controls external equipment with the aid of the monitoring platform. The wireless sensor nodes run TinyOS, and the node communication program is written in NesC. Based on virtual instrument technology, a man-machine interactive program is designed in LabVIEW.
2012, 41(2): 269-273.
doi: 10.3969/j.issn.1001-0548.2012.02.019
Abstract:
Most existing optimal experimental design (OED) methods are based on either a linear regression model or the Laplacian regularized least squares (LapRLS) model. This paper proposes a new active learning algorithm based on the second-order Hessian energy, which has manifold learning capability. The algorithm selects the optimal samples that minimize the parameter covariance matrix of the Hessian regularized regression model, and overcomes the drawbacks of LapRLS. Experimental results on content-based image retrieval demonstrate the effectiveness of the proposed approach.
2012, 41(2): 274-279.
doi: 10.3969/j.issn.1001-0548.2012.02.020
Abstract:
Recent studies on soft errors have focused on dynamic phase-based reliability management, with which system resources can be tuned at runtime according to the different reliability characteristics of structures. The architectural vulnerability factor (AVF) is one of the most commonly used metrics for estimating a structure's vulnerability. An effective and fast AVF analysis framework is developed, and the AVF values of several key microarchitectural structures are evaluated. Furthermore, basic block profiles and performance metric information are used to characterize the reliability-based phase behavior of these structures, and then k-means clustering and regression tree algorithms are applied for reliability-based phase classification. Experimental results show that the combination of performance metrics and the regression tree method is a good candidate for soft error vulnerability phase identification.
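A minimal 1-D k-means sketch of the phase-classification step; the metric name (IPC), the interval values, and the cluster count are illustrative assumptions, not the paper's data:

```python
from statistics import mean

def kmeans_1d(values, k=2, iters=50):
    """Tiny 1-D k-means: cluster per-interval performance metric samples
    (here, hypothetical IPC values per execution interval) into k phases."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical IPC per interval: a low-IPC phase and a high-IPC phase.
ipc = [0.4, 0.5, 0.45, 1.8, 1.9, 1.85, 0.42, 1.95]
centers, phases = kmeans_1d(ipc)
print(sorted(round(c, 2) for c in centers))  # two phase centroids
```

In the paper's setting each cluster would map to a reliability phase, letting the runtime apply one protection policy per phase instead of re-estimating AVF continuously.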
2012, 41(2): 280-284.
doi: 10.3969/j.issn.1001-0548.2012.02.021
Abstract:
In recent years, the rapid development of the graphics processing unit (GPU), together with the compute unified device architecture (CUDA) technology proposed by NVIDIA, has pushed forward the application of GPUs in high performance computing (HPC). In this paper, the GPU architecture and the CUDA programming model are introduced first. Following the methods used for parallel program performance analysis on CPU clusters, a directive-based performance analysis tool for CUDA programs is designed and implemented. Experimental results validate the effectiveness of this performance analysis tool on different GPU hardware platforms.
2012, 41(2): 285-290.
doi: 10.3969/j.issn.1001-0548.2012.02.022
Abstract:
The distance prediction performances of landmark-based IP coordinate system are sensitive to the selection and placement of landmarks and the malicious behaviors of some hostile nodes, which have greatly impact on the accuracy and trustiness of the predicted distances for most of applications. This paper proposes a new landmark-based IP coordinate system with defend-capable malicious behaviors (for short LCSD), in which the selection and placement of landmarks are optimized through the distance matrix clustering scheme, the malicious behaviors of hostile nodes are prohibited to disturb the coordinate update of the normal nodes through the cooperative recommending trust-evaluation mechanism. Finally by some simulations, its performances are analyzed in terms of the relative error (RE), the neighbor approximation degree. The closest neighbor trust degree, and the results show that LCSD has much smaller RE on the whole, and outperforms ICS in the neighborhood in despite of the proportion of hostile nodes to the all nodes.
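Relative error is the standard figure of merit for coordinate-based distance prediction. As a minimal sketch, the common definition RE = |d_pred - d_meas| / d_meas is assumed here; the abstract does not state the exact formula the paper uses.

```python
def relative_error(d_pred, d_meas):
    """Relative error of a predicted network distance vs. the measured one.

    Assumes the common definition RE = |d_pred - d_meas| / d_meas.
    """
    return abs(d_pred - d_meas) / d_meas

# Example: a node pair measured at 40 ms RTT but predicted at 46 ms.
re = relative_error(46.0, 40.0)  # -> 0.15
```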
2012, 41(2): 291-298.
doi: 10.3969/j.issn.1001-0548.2012.02.023
Abstract:
A deep understanding of P2P overlay network topological characteristics is crucial for improving the performance, robustness, and scalability of P2P applications. In this paper, we apply spectral analysis methods to measured Gnutella network topologies. The properties of the spectral density, the normalized Laplacian spectrum (NLS), and the signless Laplacian spectrum (SLS) are analyzed in detail. The results indicate that the Gnutella overlay network is not a scale-free network; it has developed over time following a different set of growth processes from those of the BA (Barabási-Albert) model. Furthermore, the network core of Gnutella overlays is stable, and its NLS and SLS can be treated as a "fingerprint" of the network for examining its health in the face of massive node failures. Finally, the power law of the SLS, together with the two "fingerprints" of Gnutella overlays, provides a composite way to assess the realism of graphs generated by various P2P network models. Our findings and analysis techniques have broad applicability to P2P networks and provide detailed insights into P2P overlay network structural properties.
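The two spectra named above have standard definitions: the normalized Laplacian is L_norm = I - D^{-1/2} A D^{-1/2} and the signless Laplacian is Q = D + A, where A is the adjacency matrix and D the degree matrix. A minimal NumPy sketch of computing both spectra for a small undirected graph (not the paper's code, which operates on measured Gnutella topologies):

```python
import numpy as np

def laplacian_spectra(A):
    """Return (NLS, SLS): sorted eigenvalues of the normalized Laplacian
    I - D^{-1/2} A D^{-1/2} and of the signless Laplacian D + A."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)                      # degree of each node
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg)) # assumes no isolated nodes
    L_norm = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt
    Q = np.diag(deg) + A                     # signless Laplacian
    nls = np.sort(np.linalg.eigvalsh(L_norm))
    sls = np.sort(np.linalg.eigvalsh(Q))
    return nls, sls

# Example: a triangle graph. For any connected graph the smallest
# normalized-Laplacian eigenvalue is 0.
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
nls, sls = laplacian_spectra(A)
```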
2012, 41(2): 299-304.
doi: 10.3969/j.issn.1001-0548.2012.02.024
Abstract:
A method to directly reduce the property-oriented concept lattice of a formal context is proposed in this paper. The similarity degree between object sets and the similarity degree between attribute sets are first introduced, and object neighborhoods and attribute neighborhoods are then created accordingly. The sizes of the object and attribute neighborhoods are adjusted by the similarity degrees; thus, the number of property-oriented concepts can be controlled and the property-oriented concept lattice is compressed. Using this method, we can compress the property-oriented concept lattice from the viewpoints of object covering and attribute covering, and, most importantly, the reduced lattice is a subset of the original one.
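The abstract does not give the paper's similarity formula, so as an illustrative assumption the sketch below uses the Jaccard index for the similarity degree between two object (or attribute) sets, with a neighborhood whose size is controlled by a threshold on that degree; `similarity` and `neighborhood` are hypothetical names.

```python
def similarity(s, t):
    """Jaccard similarity between two sets (an assumed stand-in for the
    paper's similarity degree)."""
    s, t = set(s), set(t)
    if not s and not t:
        return 1.0
    return len(s & t) / len(s | t)

def neighborhood(x, universe, threshold):
    """All sets in `universe` whose similarity to `x` meets the threshold."""
    return [y for y in universe if similarity(x, y) >= threshold]
```

Raising the threshold shrinks the neighborhoods, so fewer concepts are merged and the compressed lattice stays closer to the original; lowering it compresses more aggressively.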
2012, 41(2): 305-310.
doi: 10.3969/j.issn.1001-0548.2012.02.025
Abstract:
Next-block predictors enable highly efficient control-flow speculation in EDGE architectures. This paper analyzes the defects of next-block predictors based on O-GEHL prediction technology and proposes two improvements: exit prediction without a chooser, and exit prediction with binary O-GEHL prediction. Performance evaluation results show that the design without a chooser in the exit prediction stage outperforms the previously published one by 0.7%; the design using 8 conditional O-GEHL predictors improves performance by 3% with the largest resource budget; and the design using 4 conditional O-GEHL predictors only for the first 4 exits improves performance by 2%.
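For readers unfamiliar with GEHL-style prediction: several tables of signed counters are indexed by hashes of geometrically increasing history lengths, the predicted direction is the sign of the summed counters, and counters are trained on mispredictions or low-confidence sums. The toy Python sketch below illustrates only that generic scheme; table sizes, history lengths, and the update threshold are illustrative assumptions, not the paper's design.

```python
class GEHLPredictor:
    """Toy GEHL-style direction predictor (illustrative sketch only)."""

    def __init__(self, n_tables=4, table_bits=10, threshold=4):
        self.size = 1 << table_bits
        self.tables = [[0] * self.size for _ in range(n_tables)]
        self.hist_lens = [2 ** i for i in range(n_tables)]  # geometric series
        self.history = 0
        self.threshold = threshold

    def _indices(self, pc):
        # Each table is indexed by the PC hashed with a different-length
        # slice of the global history.
        return [(pc ^ (self.history & ((1 << L) - 1))) % self.size
                for L in self.hist_lens]

    def predict(self, pc):
        s = sum(t[i] for t, i in zip(self.tables, self._indices(pc)))
        return s >= 0, s  # direction = sign of the counter sum

    def update(self, pc, taken):
        pred, s = self.predict(pc)
        # Train on a misprediction or a low-confidence (small |sum|) hit.
        if pred != taken or abs(s) <= self.threshold:
            delta = 1 if taken else -1
            for t, i in zip(self.tables, self._indices(pc)):
                t[i] = max(-8, min(7, t[i] + delta))  # saturating counters
        self.history = ((self.history << 1) | int(taken)) & 0xFFFF
```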
2012, 41(2): 311-316.
doi: 10.3969/j.issn.1001-0548.2012.02.026
Abstract:
In this paper, by solving the Poisson equation for the structure of a Si/SiGe pMOSFET with a polycrystalline SiGe gate, a threshold voltage model and an I-V characteristic model are proposed. The secondary effects induced by scaling of the MOS device, such as drain-induced barrier lowering (DIBL), the short-channel effect (SCE), and the velocity overshoot effect, are also taken into account. By simulating the model in Matlab, the relationships between the threshold voltage and relevant parameters, such as the Ge content in the P+ poly-SiGe gate, gate length, oxide thickness, Ge content in the relaxed SiGe virtual substrate, doping concentration, and drain bias, are obtained. The I-V results show that a MOS device with a strained-Si channel has better output characteristics. Finally, the validity of the model is confirmed by comparing the analytical results with simulation data from the 2-D device simulator ISE. The proposed model can also be readily used for the analysis and design of small-scale Si/SiGe pMOSFETs.
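To illustrate the kind of model being built, here is the textbook long-channel threshold-voltage expression with a simple linear DIBL correction. This is a generic sketch, not the paper's Si/SiGe-specific formulation, and the flat-band voltage, DIBL coefficient, and doping values are illustrative assumptions.

```python
import math

Q = 1.602e-19      # elementary charge, C
EPS_SI = 1.04e-10  # permittivity of Si, F/m
EPS_OX = 3.45e-11  # permittivity of SiO2, F/m
KT_Q = 0.0259      # thermal voltage at 300 K, V
NI = 1.45e16       # intrinsic carrier concentration of Si, m^-3

def threshold_voltage(Na, tox, vfb=-0.9, vds=0.0, dibl=0.02):
    """Vth = Vfb + 2*phi_F + sqrt(2*eps_Si*q*Na*2*phi_F)/Cox - dibl*Vds.

    Textbook long-channel expression plus a linear DIBL term; all
    default parameter values are illustrative assumptions.
    """
    phi_f = KT_Q * math.log(Na / NI)                  # Fermi potential
    cox = EPS_OX / tox                                # oxide capacitance per area
    qdep = math.sqrt(2.0 * EPS_SI * Q * Na * 2.0 * phi_f)  # depletion charge
    return vfb + 2.0 * phi_f + qdep / cox - dibl * vds
```

The `- dibl * vds` term captures the trend the paper models: a higher drain bias lowers the barrier and hence the threshold voltage.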
2012, 41(2): 317-320.
doi: 10.3969/j.issn.1001-0548.2012.02.027
Abstract:
A low-ripple switched-capacitor charge pump for phase change memory (PCM) is proposed. In contrast to the conventional switched-capacitor charge pump, the flying capacitor of the proposed charge pump is charged to the difference between the target output voltage and the input voltage during the charge phase. In the discharge phase, the flying capacitor is connected between the input and the output of the charge pump to transfer energy to the output, so the output is regulated at the target voltage. This new operation mode reduces the output ripple caused by charge redistribution. A simulation was carried out for a DC input range of 2.8~4.4 V in a standard SMIC 0.18 μm CMOS process. Results show that the new operation mode can regulate the output at about 5 V under load conditions from 0 to 10 mA, with a ripple voltage lower than 3 mV and a peak power efficiency of 88%.
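The two-phase operation described above can be checked with simple ideal-case arithmetic: charging the flying capacitor to (Vout_target - Vin) and then stacking it on the input yields Vin + (Vout_target - Vin) = Vout_target. The sketch below is an idealized illustration of that arithmetic (lossless, no load), not the paper's circuit; the function names are hypothetical.

```python
def flying_cap_voltage(v_in, v_out_target):
    """Voltage across the flying capacitor at the end of the charge phase
    (idealized: charged exactly to the target-minus-input difference)."""
    return v_out_target - v_in

def discharge_phase_output(v_in, v_fly):
    """Output node voltage with the flying capacitor stacked on the input."""
    return v_in + v_fly

# Example with a 3.3 V input and a 5 V target, inside the 2.8~4.4 V range.
v_in, v_target = 3.3, 5.0
v_fly = flying_cap_voltage(v_in, v_target)    # about 1.7 V across the cap
v_out = discharge_phase_output(v_in, v_fly)   # sits at the target voltage
```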