2014 Vol. 43, No. 2
2014, 43(2): 162-166.
doi: 10.3969/j.issn.1001-0548.2014.02.001
Abstract:
To increase the speed of collaborative filtering recommendation in social networks, an improved nearest-neighbor algorithm is proposed in this paper, together with a detailed proof of its correctness. The similarity between users is measured from their trust relationships by the shortest-path method, and a layered graph combined with dynamic programming is applied to compute it. Furthermore, the recommendation speed can be improved by limiting the depth of the relationship chain in practical social-network applications. Comparative simulations are carried out on the KDD Cup 2012 Track datasets. The results show that the proposed algorithm achieves a better balance between accuracy and recommendation efficiency.
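The depth-limited, layered search can be sketched as a breadth-first pass over a trust graph, where each BFS layer plays the role of one layer of the layered graph. The graph shape, the inverse-chain-length similarity, and the depth limit below are illustrative assumptions, not the paper's exact formulation.

```python
def trust_similarity(graph, source, target, max_depth=3):
    """Similarity between two users as the inverse length of their
    shortest trust chain, searched no deeper than max_depth.
    graph maps each user to the list of users he or she trusts."""
    if source == target:
        return 1.0
    frontier, visited = {source}, {source}
    # Each iteration expands one layer of the layered graph.
    for depth in range(1, max_depth + 1):
        nxt = set()
        for u in frontier:
            for v in graph.get(u, ()):
                if v == target:
                    return 1.0 / depth   # shortest chain has this length
                if v not in visited:
                    visited.add(v)
                    nxt.add(v)
        frontier = nxt
    return 0.0  # no trust chain within the depth limit
```

Lowering max_depth trades recall of distant neighbors for speed, which is the practical limit on relationship-chain depth the abstract mentions.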
2014, 43(2): 167-173.
doi: 10.3969/j.issn.1001-0548.2014.02.002
Abstract:
In this paper, we try to distinguish between the impacts of media and of sociability on information spreading. We analyze forwarding behavior on Sina Weibo and find that the spreading paths of large-scale microblog forwarding chains present a clear star-like structure, in which high-degree nodes play an important role in promoting the scale and speed of information spread. In addition, we observe that a friend's forwarding behavior increases a user's own forwarding probability, and that the impact is greater across reciprocal links.
2014, 43(2): 174-183.
doi: 10.3969/j.issn.1001-0548.2014.02.003
Abstract:
According to the different levels of user network structure, this paper introduces several metrics for studying microblog user network structures at the micro, medium, and macro levels. We analyze their merits and defects and present some possible research directions. This survey contributes to a quick comprehension of the current research progress on microblog user network structure and provides a foundation for researchers.
2014, 43(2): 184-187.
doi: 10.3969/j.issn.1001-0548.2014.02.004
Abstract:
Existing analyses of multi-tone jamming (MTJ) mainly focus on frequency-hopping systems and assume that the jamming signal is time-invariant. In this paper, the bit error rate (BER) performance of the time division-synchronization code division multiple access (TD-SCDMA) system is analyzed in the presence of time-varying multi-tone jamming. The practical scenario in which the MTJ signal experiences a time-varying Rayleigh fading channel is considered, and the effect of the Doppler shift on performance is studied. The BER expression is derived and validated by computer simulations. The simulation results show that performance degrades as the Doppler shift increases. The study provides a theoretical basis for jamming and anti-jamming technology.
2014, 43(2): 188-193.
doi: 10.3969/j.issn.1001-0548.2014.02.005
Abstract:
Aggregate signatures allow an efficient algorithm to aggregate n signatures of n distinct messages from n different signers into one single signature, which saves bandwidth and improves efficiency in the verification phase. Certificateless public key cryptography overcomes both the complicated certificate management of traditional public key cryptography and the key escrow problem of identity-based cryptography. In this paper, we present a new efficient certificateless aggregate signature scheme based on the bilinear pairing. The proposed scheme is proven existentially unforgeable against adaptive chosen-message attacks under the computational Diffie-Hellman assumption in the random oracle model. The signature length is only two group elements, independent of the number of signers, and verification requires only four pairings and n scalar multiplications. Thus, the proposed scheme is well suited to applications in resource-constrained environments.
2014, 43(2): 194-197.
doi: 10.3969/j.issn.1001-0548.2014.02.006
Abstract:
For the analysis of the near-field transmission characteristics of the hypersonic plasma sheath, a layered-media treatment is often applied to simplify the problem. However, it can only handle media with relatively simple distributions; for the complex hypersonic flow fields of real applications, this treatment introduces significant errors. In this paper, a frequency-dependent finite-difference time-domain method is used to accurately analyze the transmission characteristics of a heterogeneous plasma sheath. Based on the numerical results, the basic tendency of the transmission process is further explored, which is also valuable for practical applications.
2014, 43(2): 198-202.
doi: 10.3969/j.issn.1001-0548.2014.02.007
Abstract:
The necessary and sufficient condition for the late-time stability of the marching-on-in-time (MOT) algorithm based on the time-domain magnetic field integral equation (TDMFIE) is obtained through theoretical derivation. Using this condition, the late-time stability of the TDMFIE-MOT algorithm can be estimated accurately. Based on the condition, an improved algorithm is proposed to ensure that the transient currents remain stable in the late time. The stability condition is validated by numerical results, which also show that the improved algorithm is more accurate than the original TDMFIE-MOT algorithm.
2014, 43(2): 203-206.
doi: 10.3969/j.issn.1001-0548.2014.02.008
Abstract:
The use of a maximally sparse array to achieve a desired radiation pattern is a key problem in sparse array synthesis theory. Since the array antenna is intrinsically sparse, the sparse array synthesis problem can be seen as a sparse signal reconstruction process. In this paper, the problem of beamforming with an antenna array is proved to be equivalent to the optimization problem of recovering a sparse signal vector. By using the focal underdetermined system solver (FOCUSS) algorithm, a maximally sparse array, together with the element locations and amplitudes, is obtained quickly and accurately. The efficiency of this method is demonstrated by theoretical analyses and numerical simulations.
2014, 43(2): 207-211.
doi: 10.3969/j.issn.1001-0548.2014.02.009
Abstract:
In the presence of multi-false-target deception jamming, a single-station radar performs poorly in anti-jamming because of its single angle of view and the limited number of measurement points. To address this problem, a true/false target recognition method for a centralized radar network is proposed in this paper, which fuses measurements with a spatial-correlation processing technique based on their locations and velocities. For an associated sequence of measurements, location-based fusion is applied first; the velocity vector is then calculated and applied to the fusion so as to eliminate false targets from the location-based fusion result. The deception probability of the radar network is thereby lowered without decreasing the recognition probability of the true target.
2014, 43(2): 212-215.
Abstract:
Based on a turbulence model, this paper proposes an approach to odor source localization designed to find a static odor source in a stable wind field. A spatially distributed sensor array is adopted to monitor the odor concentration at multiple spots, and the odor source is localized from the array's measurements. The optimal estimate of the source position is searched for within a predetermined range: by traversal calculation, the odor source localization problem is converted into a least-squares optimization. Compared with other odor source localization algorithms, the new algorithm can operate without knowing the wind speed or the diffusion coefficient, and the traversal calculation avoids being trapped in local optima. Simulation experiments are performed to verify the localization algorithm; a metal-oxide-semiconductor sensor array and a sampling system are designed, and ethanol odor localization experiments are carried out in the laboratory.
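The traversal-and-least-squares idea can be sketched as a grid search in which, for each candidate position, the best-fitting source strength has a closed-form least-squares solution. The simple inverse-distance concentration model below is an illustrative stand-in for the paper's turbulence model.

```python
import numpy as np

def locate_source(sensor_xy, readings, grid_xy):
    """Return the candidate position whose best least-squares source
    strength explains the sensor readings with the smallest residual."""
    best, best_err = None, np.inf
    for p in grid_xy:
        d = np.linalg.norm(sensor_xy - p, axis=1)
        if np.any(d < 1e-9):
            continue                     # skip candidates on a sensor
        g = 1.0 / d                      # assumed model: c_i = q / d_i
        q = (g @ readings) / (g @ g)     # closed-form optimal strength
        err = np.sum((q * g - readings) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best
```

Because every candidate on the grid is evaluated, the search cannot get stuck in a local optimum, at the cost of traversal time.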
2014, 43(2): 216-221.
doi: 10.3969/j.issn.1001-0548.2014.02.011
Abstract:
This paper presents a whole-system model of a magneto-optic current transformer (MOCT) based on symmetrical double-light-circuit detection and the Jones matrix. The results indicate that the output voltage is the product of two parts: a linear part, which is directly proportional to the transient value of the primary current, and a nonlinear part, which is related to the magnetic linear birefringence effect and the Faraday effect. In an ideal MOCT system without magnetic linear birefringence, when the transient value of the primary current is small enough, the output voltage of the MOCT is directly proportional to it. Simulation results verify the correctness of this conclusion.
2014, 43(2): 222-225.
doi: 10.3969/j.issn.1001-0548.2014.02.012
Abstract:
A general-purpose amplifier for low-frequency weak-signal acquisition, fabricated in a CSMC 0.5 μm 2P3M process, is presented. The amplifier adopts a fully differential structure with AC coupling and capacitive feedback; its input resistance is boosted and its bandwidth is adjustable. A PMOS pseudo-resistor allows the high-pass cutoff frequency to be adjusted through its gate voltage, which suits different low-frequency weak-signal acquisition tasks. The measured gain of the amplifier is 45.2 dB, the high-pass cutoff frequency can be adjusted from 1 Hz to 10 kHz, and the equivalent input-referred noise between 100 Hz and 7 kHz is 17.8 μV.
2014, 43(2): 226-230.
doi: 10.3969/j.issn.1001-0548.2014.02.013
Abstract:
A novel and exact adaptive slope compensation circuit is proposed in this paper. Compared with traditional adaptive slope compensation circuits, the proposed structure takes into account not only the input and output voltages but also the product of the inductor current and the on-resistance of the power transistor, which achieves self-regulation and precise control of the compensation amount, thus improving the load capability and transient response of the DC-DC converter and strongly guaranteeing system stability. A boost DC-DC converter with a 1.8×1.8 mm² chip area using the proposed structure is designed and implemented in a 0.35 μm CMOS process. Experimental results show that the converter performs well and that the variation of the maximum inductor current is less than 6.5% with the duty cycle ranging from 15% to 90%.
2014, 43(2): 231-234.
doi: 10.3969/j.issn.1001-0548.2014.02.014
Abstract:
An efficient F-HMIPv6 protocol based on cloud storage is proposed in this paper. A cloud storage server is introduced to manage the care-of address (CoA) lists for the access router (AR) and the mobility anchor point (MAP) and to maintain the uniqueness and usability of the addresses listed in them. Since the AR and MAP apply for CoAs directly from the cloud storage server, the address detection operation is avoided and the AR and MAP no longer need to maintain the address lists. The protocol also reduces the length of the handover initiate (HI) and handover acknowledge (HACK) messages. Compared with the standard F-HMIPv6 protocol, it has the significant advantages of reducing handover latency, saving network bandwidth, and increasing practicability.
2014, 43(2): 235-240.
doi: 10.3969/j.issn.1001-0548.2014.02.015
Abstract:
To store data in the most suitable location, an adaptive data brokerage method for wireless sensor networks is proposed. First, a grid-based network model is established and the positional relationship among the acquiring node, the sink, and the initial storage node is analyzed, so that the storage method can switch between local storage and data-centric storage. The method is based on a virtual extended grid and can balance the total energy consumption. Simulation results show that, compared with the geographic hash table (GHT), the method obtains better performance in terms of total energy consumption, network lifetime, and packet loss rate.
2014, 43(2): 241-246.
doi: 10.3969/j.issn.1001-0548.2014.02.016
Abstract:
The relevance vector machine (RVM) is applied to network traffic classification. First, the experimental data are standardized; then the RVM is compared with other machine learning tools. Finally, a doubting interval is introduced to analyze the predicted classification probability, based on which a new hybrid traffic classification approach is proposed. Experimental studies illustrate that: 1) the RVM outperforms the support vector machine (SVM) in three performance measures, and its classification accuracy remains rather high with small samples; 2) probabilistic classification has a rather low accuracy inside the doubting interval but an accuracy above 98% outside it.
2014, 43(2): 247-251.
doi: 10.3969/j.issn.1001-0548.2014.02.017
Abstract:
Identification and classification of network traffic are an essential prerequisite for network management, traffic engineering, and other applications. This paper studies five typical Internet applications, BitTorrent, HTTP, PPStream, QQ, and Thunder, which together occupy a majority of Internet traffic. Analysis of data collected during different time periods shows that each application has a unique, typical packet size distribution, with only minimal changes between periods. Moreover, analysis of discretely sampled proportions shows that as the number of samples decreases, the packet size distribution patterns do not change significantly: the distribution curve varies, but its overall shape and trends remain substantially unchanged.
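Per-application packet-size fingerprints of this kind can be compared with a normalized histogram and a simple distribution distance. The bin width and the use of total variation distance below are illustrative choices, not the paper's methodology.

```python
import numpy as np

def size_distribution(packet_sizes, bin_width=100, max_size=1500):
    """Fraction of packets falling into each size bin."""
    bins = np.arange(0, max_size + bin_width, bin_width)
    h, _ = np.histogram(packet_sizes, bins=bins)
    return h / h.sum()

def tv_distance(p, q):
    """Total variation distance: 0 for identical distributions,
    1 for distributions with disjoint support."""
    return 0.5 * np.abs(p - q).sum()
```

A small distance between two captures of the same application, and a large distance across applications, is what makes the packet-size distribution usable as a classification feature.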
2014, 43(2): 252-256.
doi: 10.3969/j.issn.1001-0548.2014.02.018
Abstract:
Video foreground segmentation is one of the key problems in the field of computer vision, with important applications such as video surveillance, retrieval, and event detection. Traditional video foreground segmentation algorithms are mainly designed for static scenes and cannot cope with dynamic ones. In this article, a novel video foreground segmentation method based on the Gaussian mixture model (GMM) and optical flow residuals is proposed. First, the preliminary foreground region is estimated by the GMM; then, foreground regions with dynamic texture are detected by optical flow residuals and removed; finally, morphology is used to refine the estimated foreground. Experimental evaluation shows that the proposed method obtains more accurate foreground regions in dynamic scenes than existing methods.
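The GMM stage can be sketched with a classic per-pixel mixture update in the style of Stauffer and Grimson. The learning rate, match threshold, and weight threshold below are illustrative, and the optical-flow-residual stage is omitted.

```python
import numpy as np

def gmm_pixel_update(x, mu, var, w, alpha=0.05, thresh=2.5, bg_w=0.2):
    """One background-model update for a single pixel value x.
    mu, var, w are length-K arrays (updated in place); returns True
    when the pixel is classified as foreground."""
    d2 = (x - mu) ** 2 / var
    match = np.where(d2 < thresh ** 2)[0]
    if match.size == 0:
        k = int(np.argmin(w))            # replace the weakest component
        mu[k], var[k], w[k] = x, 225.0, alpha
        w /= w.sum()
        return True                      # unmatched pixel -> foreground
    k = match[np.argmax(w[match])]       # strongest matching component
    mu[k] += alpha * (x - mu[k])         # running mean update
    var[k] += alpha * ((x - mu[k]) ** 2 - var[k])
    w += alpha * ((np.arange(w.size) == k) - w)  # weights still sum to 1
    return bool(w[k] < bg_w)             # weak component -> foreground
```

Pixels matching a heavily weighted component are background; in dynamic scenes this preliminary mask is what the optical-flow-residual stage then cleans up.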
2014, 43(2): 257-261.
doi: 10.3969/j.issn.1001-0548.2014.02.019
Abstract:
Detection window fusion is an important step in object detection based on sliding windows. To overcome the shortcomings of traditional detection fusion methods, this paper proposes a novel one. The method treats every preliminary window as a location in a physical system, and the heat conductivity between two locations is calculated from the detection scores and the overlapping area of the corresponding windows. The detection window fusion task is then modeled as temperature maximization under linear anisotropic heat diffusion, in which maximizing the temperature with K finite heat sources corresponds to selecting K final windows. A near-optimal solution of the objective function is obtained by a greedy algorithm. Experimental results on the VOC2009 and INRIA pedestrian datasets show that our method not only removes overlapping detections but also rejects false positives and prevents interference between adjacent objects. Compared with traditional non-maximum suppression, our method obtains higher detection precision without loss of recall.
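As a schematic analogue only (not the paper's diffusion solver), the greedy selection of K "heat sources" can be sketched with a conductivity matrix built from detection scores and window overlap, picking at each step the window that most increases the total heat collected from all preliminary windows:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def fuse_windows(boxes, scores, k):
    """Greedily choose k windows maximizing the 'heat' they collect;
    conductivity from window i to source j is scores[j] * IoU(i, j)."""
    n = len(boxes)
    cond = np.array([[scores[j] * iou(boxes[i], boxes[j])
                      for j in range(n)] for i in range(n)])
    chosen, covered = [], np.zeros(n)
    for _ in range(k):
        gains = [np.maximum(covered, cond[:, j]).sum() for j in range(n)]
        j = int(np.argmax(gains))
        chosen.append(j)
        covered = np.maximum(covered, cond[:, j])
    return chosen
```

The coverage objective is submodular, which is why a greedy pass gives a near-optimal selection; the paper's heat-diffusion objective plays the analogous role.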
2014, 43(2): 262-267.
doi: 10.3969/j.issn.1001-0548.2014.02.020
Abstract:
The credibility and precision of 3D reconstruction are sensitive to registration and rectification errors. To tackle this problem, we propose a high-precision, rectification-free, narrow-baseline 3D reconstruction algorithm in which high-accuracy parallax is acquired by combining phase correlation with curve fitting. A matrix with an obvious peak is obtained by the inverse Fourier transform of the normalized cross-power spectrum, which is generated from the Fourier transforms of the two images; sub-pixel disparity measurements are then obtained by curve fitting the data near that peak. The baseline's projection on the imaging plane (the main direction for short) is found by computing and normalizing the sum of all points' parallaxes. The 3D Euclidean reconstruction is obtained by projecting every point's parallax onto the main direction, because the depth of each point is proportional to its projection. A large number of experiments on various scenes show that the proposed algorithm attains excellent reconstruction performance for stereo images with a narrow baseline.
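The integer-pixel part of the phase-correlation step can be sketched as below; the sub-pixel curve fitting around the peak is omitted for brevity.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer-pixel translation of image b relative to image a, found
    as the peak of the inverse FFT of the normalized cross-power
    spectrum (the matrix with an obvious peak described in the text)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fb * np.conj(Fa)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # map peaks in the far half of the array back to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Fitting a curve (for example a parabola) to the correlation values around the returned peak is what refines this integer estimate to sub-pixel disparity.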
2014, 43(2): 268-271,286.
doi: 10.3969/j.issn.1001-0548.2014.02.021
Abstract:
By analyzing the problems of real-time scheduling in mixed-criticality systems, a new scheduling algorithm, forward and backward time window partition-criticality factor prior (FBTWP-CFP), is proposed. It partitions the running time windows for tasks of all criticality levels offline, from both the forward and backward directions, and generates idle windows for tasks that receive lower priority, according to a criticality factor, when the criticality level changes. The simulation results show that FBTWP-CFP outperforms the criticality assigned priority algorithm (CAPA) and own criticality based priority (OCBP) in the number of completed tasks and in a reduced deadline miss ratio.
2014, 43(2): 272-277.
doi: 10.3969/j.issn.1001-0548.2014.02.022
Abstract:
Analysis and modeling of application energy consumption play a vital role in optimizing the energy consumption of smart mobile devices. An application energy model based on the application's running time is proposed. Compared with component-level application energy models, which are accurate but complex, this model is characterized by a single time variable and incorporates a variety of device properties, such as power consumption and performance. It can therefore rapidly estimate a mobile device's energy consumption during application execution, since the execution time of an application is easy to measure. The experimental results show that the average error rate of the proposed model is 0.89%, 1.37%, and 0.29% compared with the component energy model provided by the Android application framework on the GT-I9108, GT-I9308, and GT-P3108, respectively. The model can help end users rapidly and conveniently predict the battery energy consumed by applications.
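A runtime-based energy model of the kind the abstract describes can be illustrated with a minimal sketch: assume average power draw during a run is roughly constant, so energy is approximately linear in execution time, E ≈ a·t + b, with the coefficients calibrated per device from measured runs. The function names and calibration values here are ours, not from the paper:

```python
import numpy as np

def fit_energy_model(times_s, energies_j):
    """Least-squares fit of E = a*t + b from calibration runs.
    Returns a (approx. average power, W) and b (fixed overhead, J)."""
    A = np.vstack([times_s, np.ones_like(times_s)]).T
    (a, b), *_ = np.linalg.lstsq(A, energies_j, rcond=None)
    return a, b

def predict_energy(a, b, t_s):
    """Predict energy for a run of t_s seconds on the calibrated device."""
    return a * t_s + b
```

The appeal of such a model is exactly what the abstract claims: once a and b are calibrated, prediction needs only the easily measured execution time.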
2014, 43(2): 278-281.
doi: 10.3969/j.issn.1001-0548.2014.02.023
Abstract:
A chaos electromagnetism-like method is proposed for constrained optimization problems. A multi-objective technique is adopted to transform constrained problems into unconstrained bi-objective optimization problems for constraint handling. A computation scheme for the charge of, and the force exerted on, the particles is presented for the new model. To accelerate convergence, chaos optimization is incorporated to improve the particles. Simulation results on benchmark problems demonstrate that the proposed algorithm can quickly find the global or an approximate optimal solution. Compared with the results of existing algorithms, the new method is competitive.
2014, 43(2): 282-286.
doi: 10.3969/j.issn.1001-0548.2014.02.024
Abstract:
A novel low-power exponential current generator for variable gain amplifiers (VGAs) is proposed in this paper. The generator uses a MOSFET biased in the sub-threshold region to produce an exponential current. To compensate for process and temperature variations, a threshold detector and a system-level solution are proposed. The exponential current generator is verified in a TSMC 0.18 μm standard CMOS technology. Within a ±0.41 dB error, the dynamic range of the proposed generator is 30 dB. The power consumption is 11 μW, and the minimum supply voltage is 0.9 V.
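The sub-threshold biasing the abstract relies on gives a drain current that is exponential in gate voltage, I_D ≈ I_0·exp(V_GS/(n·V_T)), which is what makes a dB-linear gain characteristic possible. A quick numeric check of that dB-linearity, with illustrative parameter values (I_0, n, V_T below are not from the paper):

```python
import math

def sub_vt_current(vgs, i0=1e-9, n=1.5, vt=0.026):
    """Sub-threshold drain current model: exponential in gate voltage.
    i0, n, vt are illustrative (leakage scale, slope factor, thermal voltage)."""
    return i0 * math.exp(vgs / (n * vt))

def gain_db(v1, v2):
    """Current ratio between two bias points, expressed in dB."""
    return 20.0 * math.log10(sub_vt_current(v2) / sub_vt_current(v1))
```

Because the current is a pure exponential, equal steps in control voltage give equal steps in dB, i.e. the gain curve is a straight line on a dB scale.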
2014, 43(2): 287-291.
doi: 10.3969/j.issn.1001-0548.2014.02.025
Abstract:
To implement a high-performance layer 2 switch address lookup function with minimum hardware cost, a 10-bit hash algorithm consisting of registers and XOR gates is presented, based on an analysis of the characteristics of the switch chip's address table and the cyclic redundancy check algorithm. The 48-bit physical address is transformed into a 10-bit lookup address in parallel, so an address table with a storage depth of 1 024 can be searched quickly and accurately. A layer 2 switch chip using this address lookup algorithm can achieve line-speed switching, improving the performance of network equipment built with such chips. The generated hash addresses are uniformly distributed over the 10-bit address space. The performance of the algorithm was further verified with a switch circuit implemented on an FPGA.
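The idea of compressing a 48-bit MAC address into a 10-bit table index with nothing but XOR gates can be sketched in software by XOR-folding successive 10-bit slices of the address. This is a generic illustration of the approach; the paper's exact gate network (CRC-derived) is not given in the abstract:

```python
def mac_hash_10bit(mac: int) -> int:
    """Fold a 48-bit MAC address into a 10-bit lookup address by XOR-ing
    successive 10-bit slices together. In hardware this is a flat layer of
    XOR gates, so the whole hash is computed in parallel in one step."""
    h = 0
    for shift in range(0, 48, 10):
        h ^= (mac >> shift) & 0x3FF
    return h & 0x3FF
```

The result always falls in [0, 1023], matching an address table of storage depth 1 024, and the XOR folding spreads the entropy of all 48 input bits across the 10 output bits.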
2014, 43(2): 292-295.
doi: 10.3969/j.issn.1001-0548.2014.02.026
Abstract:
The effect of post-oxidation annealing in an Ar atmosphere (Ar POA) at different temperatures (below 1 100℃) on the densification of thermally grown SiO2 films on n-type 4H-SiC has been studied by reflective spectroscopic ellipsometry (SE) and Fourier transform infrared (FTIR) spectroscopy. The ellipsometry results show that the SiO2 film annealed at 600℃ has the highest refractive index (1.47) and the smallest thickness (84.63 nm) among all samples. FTIR shows that the 600℃-annealed sample has the highest LO phonon intensity, which may be attributed to the highest concentration of Si-O bonds. Leakage current-voltage measurements of Al/SiO2/SiC MOS capacitors were also performed. After annealing at 600℃, the leakage current of the SiO2 thin film is decreased by two orders of magnitude; under a reverse bias of 5 V, the reverse leakage current density is only 5×10⁻⁸ A/cm². Taken together, these results indicate that annealing at 600℃ can greatly improve the compactness of thermally grown SiO2.
2014, 43(2): 296-300.
doi: 10.3969/j.issn.1001-0548.2014.02.027
Abstract:
Because of the very close link between antisocial personality disorder (ASPD) and criminal behavior, understanding the pathophysiology of ASPD is an international imperative. The objective of the present study is to develop a multivariate pattern analysis method and investigate the altered functional connectivity patterns of ASPD by using resting-state functional magnetic resonance imaging (fMRI). Our results show that multivariate pattern analysis can accurately classify ASPD and control subjects, and that ASPD may arise from decoupling among the default mode network, the attention network, the visual recognition network, and the cerebellar network. Moreover, the method successfully extracts the altered information of ASPD and provides the first evidence of altered functional brain connections in ASPD.
2014, 43(2): 301-306.
doi: 10.3969/j.issn.1001-0548.2014.02.028
Abstract:
The extreme learning machine (ELM) is a typical single-hidden-layer feedforward network (SLFN), and its efficiency for pattern recognition has been demonstrated in many studies. In this paper, ELM is applied to lie detection for the first time, in order to overcome the disadvantages of current lie detection methods, such as low accuracy and slow training. ELM is used as a classifier to separate guilty from innocent subjects. The experimental results are compared with a support vector machine (SVM), an artificial neural network (ANN), and Fisher discriminant analysis (FDA). The comparison shows that the proposed method achieves the highest training and testing accuracy with the fastest training speed.
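The speed advantage the abstract claims comes from ELM's training procedure: the hidden layer weights are drawn at random and only the output weights are solved in closed form by a pseudoinverse, so there is no iterative backpropagation. A minimal generic ELM sketch (our code, not the paper's lie-detection pipeline or features):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Train an ELM: random input weights, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                 # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y           # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Score new samples; sign of the output gives the class label."""
    return np.tanh(X @ W + b) @ beta
```

Training cost is a single matrix pseudoinverse, which is why ELM trains orders of magnitude faster than iteratively optimized networks of similar size.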
2014, 43(2): 307-310.
doi: 10.3969/j.issn.1001-0548.2014.02.029
Abstract:
Elastography based on matching pre- and post-compression signal windows generates amplitude modulation (AM) noise because of random fluctuations in signal amplitude. Building on displacement location estimation, this paper proposes a displacement field correction (DFC) method to suppress AM noise. First, the displacement locations are estimated with a displacement location estimation algorithm. Then the displacement estimates are corrected by linear interpolation based on those location estimates. Finally, the strains are computed by a gradient operation and mapped into a grayscale image. The elastogram produced with the DFC method shows significantly less AM noise. The proposed method outperforms signal amplitude log-compression (ALC) and amplitude modulation correction (AMC) at various strains and window lengths. DFC based on displacement location estimation can efficiently suppress AM noise.
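The two final steps of the pipeline above (linear interpolation of displacement estimates onto a regular grid, then a gradient to get strain) can be sketched in a few lines; this is a 1-D illustration of the general idea, not the authors' DFC algorithm:

```python
import numpy as np

def strain_from_displacements(locs, disps, grid):
    """Interpolate displacement estimates (disps at estimated locations locs)
    onto a uniform grid, then take the spatial derivative as the strain."""
    d = np.interp(grid, locs, disps)   # linear-interpolation correction step
    strain = np.gradient(d, grid)      # strain = d(displacement)/dx
    return strain
```

Under uniform compression the displacement field is linear in depth, so the recovered strain should be constant, which makes a convenient sanity check.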
2014, 43(2): 311-314.
doi: 10.3969/j.issn.1001-0548.2014.02.030
Abstract:
Electric vehicles (EVs) are developing rapidly because of energy and environmental concerns, and lithium-ion batteries play an important role in the energy storage systems of EVs. The characteristics of power lithium-ion batteries depend closely on ambient temperature. The discharge capacity characteristic and the state-of-charge versus open-circuit-voltage (SOC-OCV) curve are important parameters for describing the performance of power batteries and for designing battery management systems (BMS). Experiments on 18650 cells and packs are carried out to determine the relationship between SOC and OCV, as well as the discharge capacity at different rates and ambient temperatures.
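A BMS typically uses a measured SOC-OCV curve as a lookup table: after the cell rests, its open-circuit voltage is read and SOC is recovered by interpolating the table. A minimal sketch of that use (the table values below are illustrative placeholders, not the paper's measured data):

```python
import numpy as np

# Illustrative SOC-OCV calibration points for a generic Li-ion cell
# (SOC as a fraction, OCV in volts); a real BMS would use measured data.
soc_pts = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
ocv_pts = np.array([3.00, 3.45, 3.60, 3.75, 3.95, 4.20])

def soc_from_ocv(v):
    """Estimate state of charge from a rested open-circuit voltage by
    piecewise-linear interpolation of the calibration table."""
    return float(np.interp(v, ocv_pts, soc_pts))
```

Because the OCV curve shifts with temperature, a practical BMS would store one such table per temperature range, which is exactly why the paper characterizes the curve at different ambient temperatures.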
2014, 43(2): 315-320.
doi: 10.3969/j.issn.1001-0548.2014.02.031
Abstract:
An integrated quality diagnosis method is proposed that can diagnose both the output quality and the input parameters in a multiple-input-multiple-output (MIMO) manufacturing process. It overcomes the deficiency of traditional quality control and diagnosis methods, which can only diagnose the output quality of a manufacturing process: it also examines the input parameters and provides sensitivity analysis results for adjusting them. An out-of-control situation is first detected by establishing a residual-error T2 control chart. Then, the original output quality parameters that caused the process anomaly are identified by a BN-MYT approach, which integrates a Bayesian network with the MYT decomposition to trace the responsible output parameters through the decomposition of the T2 control chart's residual error. A neural network and sensitivity analysis are used to obtain the weights and thresholds of the neurons in the forecasting network; these are applied to compute the sensitivity of each input parameter to the root output quality via a sensitivity formula. The sensitivities represent the importance of each input parameter to the output quality failure. This integrated method can thus diagnose both the output quality characteristics and the input parameters.
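The detection step rests on the Hotelling T2 statistic: the squared Mahalanobis distance of a new multivariate observation from an in-control sample, flagged when it exceeds a control limit. A generic sketch of that monitoring quantity (this is the standard T2 statistic, not the paper's BN-MYT decomposition):

```python
import numpy as np

def hotelling_t2(X, x_new):
    """Hotelling T2 statistic of a new observation x_new against the
    in-control sample X (rows = observations, columns = variables):
    T2 = (x - mu)^T S^{-1} (x - mu)."""
    mu = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = x_new - mu
    return float(d @ S_inv @ d)
```

An observation at the sample mean scores zero, and points far outside the in-control cloud score large values; the MYT decomposition then breaks a large T2 into per-variable terms to localize which output caused the signal.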