2011 Vol. 40, No. 1
2011, 40(1): 2-10.
doi: 10.3969/j.issn.1001-0548.2011.01.001
Abstract:
Statistical learning theory is the statistical theory of small samples, and it focuses on the statistical laws and the nature of learning from small samples. The support vector machine (SVM) is a machine learning method based on statistical learning theory, and it has become a research focus of machine learning because of its excellent performance. This paper systematically describes the theoretical basis of SVMs, summarizes in detail the mainstream training algorithms of traditional SVMs as well as some new learning models and algorithms, and finally points out the research and development prospects of support vector machines.
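As a minimal illustration of the hinge-loss principle underlying SVM training (a generic sketch, not any specific algorithm surveyed in the paper), the following pure-Python snippet trains a linear SVM by sub-gradient descent; the function names and toy data are illustrative only.

```python
def train_linear_svm(samples, labels, lam=0.01, epochs=200, lr=0.1):
    """Train a linear SVM (hinge loss + L2 penalty) by sub-gradient descent.

    samples: list of feature tuples; labels: +1 or -1.
    """
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # point inside the margin: hinge loss is active
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # only the L2 regularizer contributes
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Linearly separable toy data: class +1 around (2, 2), class -1 around (-2, -2).
data = [(2, 2), (3, 2), (2, 3), (-2, -2), (-3, -2), (-2, -3)]
labels = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(data, labels)
assert all(predict(w, b, x) == y for x, y in zip(data, labels))
```

The update rule is the sub-gradient of the regularized hinge loss; real SVM solvers (e.g. SMO-style decomposition methods, which the survey covers) solve the dual problem instead.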
2011, 40(1): 11-15.
doi: 10.3969/j.issn.1001-0548.2011.01.002
Abstract:
In this paper, the fast terminal sliding mode control for a class of singular systems is studied. The singular systems are transformed into restricted equivalent forms by a nonsingular linear transformation. By the Lyapunov function method, a novel exponential fast terminal sliding mode control strategy is proposed. A special exponential terminal sliding mode hypersurface is given and a controller is designed correspondingly, such that the asymptotic stability of the closed-loop system is guaranteed, the sliding mode motion is realized, and the system state variables converge to the equilibrium point at a fast rate in finite time. Finally, a numerical example is presented to illustrate the feasibility and effectiveness of the design.
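To illustrate the finite-time convergence idea behind terminal sliding mode control (a toy scalar analogue, not the singular-system controller of the paper), the sketch below simulates x' = -αx - β|x|^γ·sign(x) with a fractional power γ ∈ (0, 1); the fractional term dominates near the origin, where a purely linear term would only converge asymptotically. All parameter values are illustrative.

```python
def simulate_terminal_sliding(x0, alpha=1.0, beta=1.0, gamma=0.5,
                              dt=1e-3, steps=20000):
    """Euler simulation of the scalar system x' = -alpha*x - beta*|x|**gamma*sign(x).

    gamma in (0, 1) gives the terminal (finite-time) convergence property.
    """
    x = x0
    trajectory = [x]
    for _ in range(steps):
        sign = (x > 0) - (x < 0)
        x += dt * (-alpha * x - beta * abs(x) ** gamma * sign)
        trajectory.append(x)
    return trajectory

traj = simulate_terminal_sliding(2.0)
# The state is driven into a small neighborhood of the origin.
assert abs(traj[-1]) < 1e-3
```

With γ = 1 the same simulation would decay only exponentially; the fractional power is what buys finite-time convergence in the continuous-time analysis.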
2011, 40(1): 16-19.
doi: 10.3969/j.issn.1001-0548.2011.01.003
Abstract:
According to the strong correlation between the transmit signal and the received signal of UWB pulse through-wall radar, a cross-correlation back-propagation (BP) algorithm is proposed. The cross-correlations between the received signals and the transmit signal are first formed and then used to generate a coherent image by the BP algorithm. The electromagnetic models for through-wall scenarios are set up based on the FDTD method. Imaging results of the two methods show that the cross-correlation BP algorithm has clear advantages over the general BP algorithm in improving the target localization precision and the signal-to-noise ratio of the image.
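The first step the abstract describes, correlating each received trace with the known transmit pulse, can be sketched as follows (a generic cross-correlation, with made-up pulse and echo data; the paper's BP imaging step is not shown):

```python
def cross_correlate(received, transmit):
    """Full cross-correlation of a received trace with the transmit pulse.

    Output index k corresponds to a lag of (k - len(transmit) + 1) samples;
    a peak marks the delay at which the transmit pulse best matches an echo.
    """
    n, m = len(received), len(transmit)
    out = []
    for lag in range(-(m - 1), n):
        s = 0.0
        for j, t in enumerate(transmit):
            i = lag + j
            if 0 <= i < n:
                s += received[i] * t
        out.append(s)
    return out

# A transmit pulse and a noise-free received trace containing one echo
# delayed by 5 samples and attenuated by half.
pulse = [1.0, -2.0, 1.0]
received = [0.0] * 5 + [0.5 * p for p in pulse] + [0.0] * 4
corr = cross_correlate(received, pulse)
lag_of_peak = corr.index(max(corr)) - (len(pulse) - 1)
assert lag_of_peak == 5  # the correlation peak recovers the echo delay
```

Because correlation with the known pulse compresses the echo into a sharp peak, the subsequent back-propagation sums coherent peaks rather than raw noisy traces, which is the source of the SNR gain the abstract reports.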
2011, 40(1): 26-29.
doi: 10.3969/j.issn.1001-0548.2011.01.005
Abstract:
A new decomposition algorithm for inverse modified discrete cosine transform (IMDCT) computation is presented. The algorithm converts an N-point IMDCT into a pair of N/4-point type-IV discrete cosine transforms (DCT-IV/DCT-IV). Due to resource sharing, the implementation of the DCT-IV pair is hardware-efficient. Compared with some well-known IMDCT algorithms, the proposed algorithm has higher computational efficiency (3 times higher) and requires 1 fewer latch (20%), 4 fewer adders (44%), and 3 fewer multipliers (50%). To verify the proposed fast algorithm, a hardware accelerator based on it is designed and applied to AC-3 audio decoding. The experimental results demonstrate that the AC-3 audio can be decoded in real time, which verifies the practicability of the decomposition algorithm.
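The DCT-IV kernel the decomposition targets has a property that makes forward/inverse hardware sharing natural: the unnormalized DCT-IV matrix is its own inverse up to a scale of N/2. A naive reference implementation (illustrative only, not the paper's fast algorithm) demonstrates this:

```python
import math

def dct_iv(x):
    """Naive O(N^2) type-IV DCT: X[k] = sum_n x[n] * cos(pi/N * (n+1/2)(k+1/2))."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * (k + 0.5))
                for n in range(N))
            for k in range(N)]

# DCT-IV is an involution up to scale: applying it twice returns (N/2) * x.
# This self-inverse structure is why DCT-IV pairs suit shared hardware.
x = [1.0, 2.0, -1.0, 0.5]
y = dct_iv(dct_iv(x))
N = len(x)
assert all(abs(yi - (N / 2) * xi) < 1e-9 for xi, yi in zip(x, y))
```

A fast decomposition such as the one proposed replaces this O(N^2) evaluation with smaller DCT-IV blocks plus pre/post twiddle stages, which is where the latch/adder/multiplier savings come from.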
2011, 40(1): 30-35.
doi: 10.3969/j.issn.1001-0548.2011.01.006
Abstract:
Wireless sensor networks (WSNs) often consist of energy-constrained sensor nodes and a data center, and data aggregation is often used to remove redundant information from the data. However, no prior work studies the network lifetime under an aggregated data rate constraint. In this paper, a network flow model is proposed for data-aggregated WSNs. By defining a loose factor for the aggregated data rate, maximum network lifetime routing and minimum aggregated data rate routing are combined, and a group of linear programming problems is designed to remove loops from the routes. Extensive simulation results show the performance of the proposed routing algorithm. The relationship between the network lifetime and the aggregated data rate is also analyzed.
2011, 40(1): 36-40.
doi: 10.3969/j.issn.1001-0548.2011.01.007
Abstract:
A fault detection mechanism based on a network partition strategy is proposed in this paper. First, the large-scale optical burst switching (OBS) network is partitioned into several subnets by a network partition method, so that fault monitoring can be carried out by the core nodes in each subnet respectively. Furthermore, with a probe module assigned to each subnet, network faults can be detected by our cycle cover method. Numerical results show that the fault localization ratio can be improved significantly by our fault detection mechanism when the average node degree is higher than 3.0; at the same time, the monitoring cost can be reduced effectively.
2011, 40(1): 41-46.
doi: 10.3969/j.issn.1001-0548.2011.01.008
Abstract:
Cognitive radio (CR) is an intelligent wireless communication system, and its core is the design of the cognitive engine. This paper presents a design scheme for a cognitive engine based on the least squares support vector machine (LS-SVM) to adapt to varied wireless environments according to experiential knowledge. A CR simulation cognitive engine is realized on an 802.11a simulation platform by collecting data and training the classification and regression models of the LS-SVM to learn channel characteristics. On the premise of learning the sensed information, CR reconfiguration is realized according to the user's needs. The simulation results show that the proposed cognitive engine can effectively realize the learning and reconfiguration functions.
2011, 40(1): 47-52.
doi: 10.3969/j.issn.1001-0548.2011.01.009
Abstract:
The slope-based fault modeling method is a good solution for soft and hard fault diagnosis problems, but it is not applicable to practical circuit fault diagnosis due to the influence of tolerance. Several tolerance handling methods are given in this paper. First, the public point of the slope feature curves is not a fixed value but is derived from the actual circuit under test (CUT); hence, the tolerance information is contained in the public point. Second, due to the influence of tolerance, the slope feature of a potential fault component is not a single value but a fan-shaped area. This fan-shaped area can be obtained by either an analytical method or a simulation method. The proposed methods resolve the tolerance influence on slope-based fault diagnosis methods.
2011, 40(1): 53-57.
doi: 10.3969/j.issn.1001-0548.2011.01.010
Abstract:
A linear analog fault diagnosis approach by means of a fault dictionary is presented to overcome three major defects of classic fault dictionaries. First, a fault dictionary using node-voltage-deviation-ratio vectors as fault signatures is constructed. The dictionary can detect both hard and soft faults for each single faulty component in linear circuits, and has some significant features such as fewer simulation runs, minimal storage space, and a wide diagnosis scope. Second, how to extend some geometric points of view to the analog fault diagnosis area is discussed, and three geometric models based on the node-voltage-deviation-ratio vectors are designed for fault isolation when tolerances are considered. Last, two examples are given to verify the methodology. The promising experimental results manifest the correctness and effectiveness of the proposed approach.
2011, 40(1): 58-63.
doi: 10.3969/j.issn.1001-0548.2011.01.011
Abstract:
An improved simulation modeling approach of the multi-signal model for analog circuits is proposed. The Monte Carlo method is used in circuit simulation to acquire statistical data of circuit features. An empirical formula is adopted to determine the number of Monte Carlo simulation runs so as to reduce the simulation time cost. Then, the statistical distributions of circuit features are decided by hypothesis testing using these statistical data. After that, an adaptive method is used to define the estimated threshold for features, which improves the model's precision. In the end, the analysis of an example circuit is given to verify the effectiveness of this modeling method. The simulation time cost and model precision of different methods are compared in order to manifest the advantage of the new method.
2011, 40(1): 64-68.
doi: 10.3969/j.issn.1001-0548.2011.01.012
Abstract:
Based on a radiative transfer model, a radiometer observation simulation system is designed, which includes the effects of topography, heterogeneity, and atmosphere. The difference between simulated and measured brightness temperatures is analyzed, and a sensitivity study is also carried out to assess the relative contribution of the roughness. It is found that the roughness introduces an uncertainty of 38 K. The developed simulation system should be very useful in modeling and improving our understanding of the radiation mechanism.
2011, 40(1): 69-72.
doi: 10.3969/j.issn.1001-0548.2011.01.013
Abstract:
Aiming at output control of inductive power transfer systems, a novel energy injection adjusting strategy is presented. The output adjusting strategy is realized by switching between an energy injection mode and a free resonance mode. Furthermore, the energy equilibrium relationships of each switching mode in the resonant network are presented. The output control is realized by duty cycle control of energy injection according to the relationships among energy injection, storage, and dissipation. The controller is designed and realized on the basis of energy analysis to avoid complex nonlinear system modeling. Finally, experimental results verify the energy injection adjusting strategy and the controller design method.
2011, 40(1): 73-79.
doi: 10.3969/j.issn.1001-0548.2011.01.014
Abstract:
An important problem in using evolutionary algorithms to discover community structure in complex networks is how to reduce the search space of network partitions to speed up convergence. This paper presents an approach to measuring similarity between nodes and communities based on the local topology information of network nodes, and proposes a new particle swarm optimization algorithm to detect fuzzy communities of a network. In the iterative process of the algorithm, the position vector of a particle is modified according to the similarity degrees between nodes and communities to improve search efficiency. Experiments on computer-generated networks of various scales and real-world networks show the capability and efficiency of the method in finding the fuzzy community structure of a network.
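A node-community similarity based on local topology, as the abstract describes, can be as simple as the fraction of a node's neighbors lying inside the candidate community. The sketch below uses that definition for illustration; the exact measure in the paper may differ.

```python
def node_community_similarity(adjacency, node, community):
    """Fraction of a node's neighbors that lie inside a candidate community.

    adjacency maps each node to its set of neighbors; community is a set of nodes.
    """
    neighbors = adjacency[node]
    if not neighbors:
        return 0.0
    return len(neighbors & community) / len(neighbors)

# Tiny two-clique graph joined by a single bridge edge (2-3).
adjacency = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
left, right = {0, 1, 2}, {3, 4, 5}
assert node_community_similarity(adjacency, 0, left) == 1.0
assert node_community_similarity(adjacency, 3, right) == 2 / 3
```

In the PSO setting, such a score lets each particle bias its membership updates toward communities a node is already well connected to, shrinking the effective search space of partitions.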
2011, 40(1): 80-84.
doi: 10.3969/j.issn.1001-0548.2011.01.015
Abstract:
In this paper, we present a new time-space based trust model which integrates time and space factors, where short-term trust, long-term trust, abuse trust, and feedback trust are the main considerations. Furthermore, community trust is proposed to solve the difficulty of binding between a user and a physical IP address. The community trust indicates the integrative trust level of the set of users' physical locations in a specific range. Theoretical analysis and simulation results show that the time-space based trust model is effective in modeling dynamic trust relationships, aggregating feedback information, and resisting attacks such as slander, peer collusion, and Sybil attacks.
2011, 40(1): 85-89.
Abstract:
A novel distributed method based on a peer level model is presented to inhibit DDoS attacks. The level model collects four factors, including the behaviors of the current peer and its network status, to evaluate the level value by uncertain inference. The forwarding rate is decided by the level value. The data on each peer are sorted by a linear classifier and then discarded according to the level value. Simulation experiments indicate that this method can inhibit DDoS attacks and enhance the resilience of a P2P overlay network.
2011, 40(1): 90-94.
doi: 10.3969/j.issn.1001-0548.2011.01.017
Abstract:
This paper presents a method of removing copies on the basis of multiple message copies to control message copies reasonably. Based on the varying characteristics of copies at node encounters in the network, we construct a discrete-time Markov chain of message copies, set up a birth-death model, and then verify and obtain its stationary distribution. Simulation results show that the linear relationship between the number of encounters and time is almost the same as that given by the theoretical model. In comparison with epidemic routing, the number of message copies decreases obviously, and the successful delivery ratio of messages reaches 90%, and even 100% as the allowed delay increases.
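For a finite birth-death chain like the copy-count model above, the stationary distribution follows directly from detailed balance: π[i] is proportional to the product of birth/death rate ratios up to state i. A minimal sketch (generic birth-death chain, with illustrative rates rather than the paper's parameters):

```python
def birth_death_stationary(birth_rates, death_rates):
    """Stationary distribution of a finite birth-death chain.

    birth_rates[i] is the rate i -> i+1 (i = 0..n-1); death_rates[i] is the
    rate i+1 -> i.  Detailed balance gives
    pi[i] proportional to prod_{j < i} birth_rates[j] / death_rates[j].
    """
    weights = [1.0]
    for b, d in zip(birth_rates, death_rates):
        weights.append(weights[-1] * b / d)
    total = sum(weights)
    return [w / total for w in weights]

# Three states (0, 1, 2 copies), constant birth rate 2 and death rate 1.
pi = birth_death_stationary([2.0, 2.0], [1.0, 1.0])
# Unnormalized weights 1 : 2 : 4, so pi = [1/7, 2/7, 4/7].
assert abs(pi[2] - 4 / 7) < 1e-12
assert abs(sum(pi) - 1.0) < 1e-12
```

In the routing context, the birth rate models copy creation at encounters and the death rate models copy removal, so the stationary distribution predicts how many copies of a message persist in steady state.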
2011, 40(1): 95-99.
doi: 10.3969/j.issn.1001-0548.2011.01.018
Abstract:
The shortcomings of existing e-mail feature selection methods are first analyzed. A new spam filtering feature selection model based on game theory is then proposed. Game theory is applied to the feature selection of mail in order to reduce the scale of information and improve the efficiency of spam filtering. When designing the feature selection model, the impact of the fuzzy membership of mail samples on feature selection is considered. The feature selection model's capacity for handling practical problems is enhanced by using a blended sample measure of the fuzzy membership function in the definition of the discrimination of feature points with respect to mail categories. The experiments performed on the CDSCE corpus show that this mail feature selection is better than other feature selection methods.
2011, 40(1): 100-104.
doi: 10.3969/j.issn.1001-0548.2011.01.019
Abstract:
An optimistic fair exchange protocol in distributed settings without a single trusted third party is proposed. In this protocol, the exchange process consists of a secret share ciphertext exchange phase and a gradual key exchange phase. Each party is able to stop releasing the remaining secret shares in case of cheating, which can be detected with high probability during the process. A threshold decryption group is involved only when unfair behavior occurs in the last exchange round. The proposed protocol does not rely on an equal computing power assumption or a trusted third party to guarantee fairness. It also has equivalent computation complexity and smaller communication complexity compared with previous gradual release schemes.
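The secret-share exchange phase relies on the idea that a secret split into shares reveals nothing until enough shares are released. As a minimal n-of-n illustration (plain XOR sharing, not the threshold scheme the paper uses), a key can be split so that all shares are needed to rebuild it:

```python
import random

random.seed(0)  # deterministic shares for the demonstration

def xor_split(secret, n):
    """Split a byte string into n XOR shares; all n are needed to rebuild it."""
    shares = [bytes(random.randrange(256) for _ in secret) for _ in range(n - 1)]
    acc = list(secret)
    for share in shares:
        acc = [a ^ s for a, s in zip(acc, share)]
    shares.append(bytes(acc))  # final share makes the XOR of all shares = secret
    return shares

def xor_combine(shares):
    acc = [0] * len(shares[0])
    for share in shares:
        acc = [a ^ s for a, s in zip(acc, share)]
    return bytes(acc)

shares = xor_split(b"session-key", 4)
assert xor_combine(shares) == b"session-key"
```

Gradual release then means publishing such shares one round at a time: a party that detects cheating simply withholds its remaining shares, and (in the paper's threshold variant) a decryption group is consulted only if the very last round goes wrong.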
2011, 40(1): 105-110.
doi: 10.3969/j.issn.1001-0548.2011.01.020
Abstract:
A learning-based super-resolution algorithm based on kernel partial least squares (KPLS) regression is proposed. First, the KPLS regression algorithm is introduced. Then a super-resolution algorithm based on KPLS regression is analyzed. High-resolution images use high-frequency information as their features, while low-resolution images use middle-frequency information as their features. Based on the relationship between the high- and low-resolution images, KPLS is used to set up a regression model, which is then applied to infer the high-resolution image. The experimental results show that our method achieves very good results on face images and car plate images, and its results are closer to the real images.
2011, 40(1): 111-115.
doi: 10.3969/j.issn.1001-0548.2011.01.021
Abstract:
At present, too much network bandwidth is occupied by a variety of P2P applications, which greatly affects the network backbone. A scheme for data flow optimization is presented to reduce the consumption of network bandwidth. In the scheme, peer relationships are divided into physical and logical neighbors, a pathfinder algorithm is used to determine the relationships between peers and achieve topology matching, and a notice/drawback mechanism is introduced into the data scheduling algorithms to keep most data transmission within the metropolitan area network. The experimental results in a simulation environment prove that the scheme is an effective way of reducing the network traffic by 90%.
2011, 40(1): 116-121.
doi: 10.3969/j.issn.1001-0548.2011.01.022
Abstract:
Gene expression programming (GEP) is effective for function mining, but unexpressed introns usually exist in gene expression. To improve the expression efficiency, this paper makes the following contributions: it proposes an evolutionary algorithm embedded gene expression programming (EGEP) based on a new gene decoding method; it proposes some new concepts, i.e., the maximum expression tree, the nested expression tree, and the spliced expression tree; and it analyzes the expression space of genes and the complexity of the algorithm. Extensive experiments show that the success rate is improved greatly, and with a small population the function mining ability clearly surpasses that of GEP. In single-gene algorithms, when the objective functions are bivariate and single-variable functions, the ratios of the convergence generation of EGEP to that of GEP are 25.5% and 16.3% respectively; compared with GEP, the success rate of EGEP increases by 43% on average in bivariate function mining.
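The introns the abstract refers to arise from how GEP genes are decoded: a linear gene in Karva notation is expanded breadth-first into an expression tree, and any symbols beyond the expressed region are simply never read. A minimal decoder (standard Karva decoding, not the paper's new EGEP decoding method) shows this:

```python
def karva_tree(gene, arity={'+': 2, '-': 2, '*': 2}):
    """Decode a Karva (K-expression) gene breadth-first into an expression tree."""
    root = [gene[0], []]
    frontier = [root]
    i = 1
    while frontier:
        nxt = []
        for node in frontier:
            for _ in range(arity.get(node[0], 0)):
                child = [gene[i], []]
                i += 1
                node[1].append(child)
                nxt.append(child)
        frontier = nxt
    return root  # symbols gene[i:] are unexpressed introns

def eval_tree(node, env):
    sym, children = node
    if sym == '+':
        return eval_tree(children[0], env) + eval_tree(children[1], env)
    if sym == '-':
        return eval_tree(children[0], env) - eval_tree(children[1], env)
    if sym == '*':
        return eval_tree(children[0], env) * eval_tree(children[1], env)
    return env[sym]  # terminal symbol

# Gene "+a*bcab": the expressed region "+a*bc" decodes to a + b*c;
# the trailing "ab" is an intron and is never read.
tree = karva_tree("+a*bcab")
assert eval_tree(tree, {'a': 2, 'b': 3, 'c': 4}) == 14
```

Because the intron tail is carried through mutation and crossover but contributes nothing to fitness, reducing this wasted expression space is exactly what EGEP's new decoding method targets.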
2011, 40(1): 122-127.
doi: 10.3969/j.issn.1001-0548.2011.01.023
Abstract:
For distributed real-time embedded systems (DRES), this paper proposes a development model, QuOCCM, that provides adaptive quality assurance. QuOCCM is composed of a Client, Qoskets, and a Server, all implemented as components. The Client interacts with applications, obtains QoS requirements, and triggers the adaptive mechanism. The Qoskets extend the QoS guarantee framework QuO and realize the system's adaptive QoS adjuster using component technology. The Server provides the implementations of quality assurance. Research shows that the method not only guarantees adaptive QoS requirements but also reduces system complexity by separating the functional path from the QoS path.
2011, 40(1): 128-133.
doi: 10.3969/j.issn.1001-0548.2011.01.024
Abstract:
In traditional smart-card-based password authentication schemes, only the identities of the server and the user are authenticated; whether the platform is trusted is not verified, and such identity authentication cannot adequately protect users' personal information. A trusted mutual authentication scheme based on smart cards is proposed, in which hash functions are used to authenticate identities and remote attestation is used to verify the platform. Analysis shows that the scheme can resist most possible attacks and is therefore more secure and efficient for smart card applications.
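A hash-based challenge-response exchange of the kind such schemes build on can be sketched as follows. This is a generic HMAC illustration under assumed names, not the paper's protocol, and it omits the smart-card storage and remote-attestation steps.

```python
import hmac
import hashlib
import os

def make_challenge():
    # The server picks a fresh random nonce for each login attempt,
    # so a captured response cannot be replayed later.
    return os.urandom(16)

def card_response(shared_key, challenge):
    # The card proves knowledge of the shared key without revealing it.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def server_verify(shared_key, challenge, response):
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, response)

key = os.urandom(32)
c = make_challenge()
assert server_verify(key, c, card_response(key, c))
```

For mutual authentication the exchange runs once in each direction, so the user also verifies the server before entering a password.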
2011, 40(1): 134-137.
doi: 10.3969/j.issn.1001-0548.2011.01.025
Abstract:
Heteroepitaxial growth of 3C-SiC on n-Si substrates has been performed by a low-pressure chemical vapor deposition process. The effects of different carbonization conditions and growth conditions on the 3C-SiC films are investigated by optical profilometry and X-ray diffraction, and the mechanism for reducing the 3C-SiC/Si wafer strain is discussed. The results show that the curvature of the wafer is reduced when the crystalline quality is improved. This can be interpreted as follows: the improvement in crystalline quality increases the intrinsic mismatch strain εm, which compensates for the thermoelastic strain εθ and thereby reduces the residual strain. The best process conditions are: carbonization temperature 1 000 ℃, carbonization time 5 min, growth temperature 1 200 ℃, and growth rate 4 μm/h. A high-quality SiC epilayer has been obtained under these conditions; the curvature of the epilayer is only 5 μm/45 mm, the full width at half maximum of the SiC(111) peak is 0.15°, and the surface roughness is 15.4 nm.
2011, 40(1): 138-141.
doi: 10.3969/j.issn.1001-0548.2011.01.026
Abstract:
A new level shifter with high driving current and low static power consumption is proposed. It makes use of the reverse characteristics of diodes and of positive feedback to generate reliable driving signals for the NMOS and PMOS transistors, respectively. The level shifter is fabricated in the CSMC 0.5 μm CMOS technology. The measured results show that the circuit performs level shifting with a high conversion speed.
2011, 40(1): 142-146.
doi: 10.3969/j.issn.1001-0548.2011.01.027
Abstract:
To detect the positions of the QRS complex and the T wave in a non-preprocessed ECG signal, this paper introduces a method combining empirical mode decomposition (EMD) with a morphological algorithm. First, a novel boundary processing method based on signal extension is proposed to reduce the boundary distortion of EMD. Second, the improved EMD is used to decompose the ECG signal into stationary intrinsic mode functions (IMFs) and residual components. Next, the two low-frequency IMFs are de-noised with a threshold method and reconstructed, and the reconstructed signal is fed to the morphological method to locate the QRS complex; the T wave is detected from the residual components. The method has been validated on data from the MIT-BIH database, and the results show that the QRS detection rate reaches 99%. Moreover, the method achieves higher accuracy and better real-time performance than traditional methods.
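The threshold de-noising and reconstruction step applied to the selected IMFs can be sketched with a generic soft-threshold rule. The EMD sifting itself and the morphological QRS locator are not reproduced; the threshold value and the toy components below are assumed examples, not values from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    # Shrink values toward zero; anything with |x| <= t is suppressed.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_and_reconstruct(imfs, t):
    # Threshold each selected IMF, then sum them to rebuild the signal
    # that would be handed to the QRS locator.
    return sum(soft_threshold(imf, t) for imf in imfs)

# Toy example: two synthetic "IMFs", one with small additive noise.
rng = np.random.default_rng(0)
n = np.arange(256)
imf1 = np.sin(2 * np.pi * n / 32) + 0.05 * rng.standard_normal(256)
imf2 = np.sin(2 * np.pi * n / 128)
clean = denoise_and_reconstruct([imf1, imf2], t=0.05)
```

Soft thresholding keeps the large oscillations that carry the QRS energy while suppressing small-amplitude noise riding on each component.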
2011, 40(1): 147-151.
doi: 10.3969/j.issn.1001-0548.2011.01.028
Abstract:
One of the most important causes of atherosclerosis is lipid peroxidation induced by reactive oxygen species (ROS), which leads to endothelial cell dysfunction and abnormal proliferation of vascular smooth muscle cells (VSMCs). Recent investigations have demonstrated that polyphenols have a preventive effect against atherosclerosis and other vascular diseases. In this paper, endothelial cells (ECs), VSMCs, and HL-60 cells (as the abnormal cell model) were used to investigate the effects of green tea catechins on proliferation. Furthermore, the ROS-scavenging ability of the catechins was measured by the hypoxanthine-xanthine oxidase chemiluminescence method. It was found that green tea catechins slightly increased EC proliferation at a concentration of 1 μg/mL, and that catechins at 1 and 3 μg/mL had no obvious effect on VSMC proliferation, whereas 10 μg/mL significantly inhibited it. For HL-60 cells, the cell viability showed a dose-dependent response. Similarly, the ROS-scavenging ability was also related to the dosage of green tea catechins.
2011, 40(1): 152-156.
doi: 10.3969/j.issn.1001-0548.2011.01.029
Abstract:
Piezoelectric intelligent beam structures with interval parameters are studied in this paper. A finite element model of the structural closed-loop system for dynamic characteristic analysis is established using a displacement feedback control method. The uncertainties of the mass and stiffness matrices are analyzed when the structural physical parameters, geometric dimensions, etc. are interval variables. A computational expression for the natural frequency of the closed-loop system is obtained by employing the Rayleigh quotient and interval algorithms, and the effects of the uncertainty of the structural interval parameters on the natural frequency are examined. In addition, the model and method presented in this paper are verified through an example. The results illustrate that the interval coefficient method has important engineering value in dealing with the dynamic characteristics of closed-loop systems of intelligent beams.
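For a single-degree-of-freedom analogue, the effect of interval stiffness and mass on the natural frequency can be sketched directly, since ω = √(k/m) is monotonic in both parameters. This toy bound is an illustrative assumption, not the paper's Rayleigh-quotient formulation for the full closed-loop finite element system.

```python
import math

def natural_freq_interval(k_lo, k_hi, m_lo, m_hi):
    """Bounds on omega = sqrt(k/m) for k in [k_lo, k_hi], m in [m_lo, m_hi].

    omega increases with k and decreases with m, so the interval
    extremes are attained at corners of the parameter box.
    """
    return math.sqrt(k_lo / m_hi), math.sqrt(k_hi / m_lo)

# +/-10% intervals around nominal k = 1000 N/m, m = 1 kg (assumed values).
lo, hi = natural_freq_interval(900.0, 1100.0, 0.9, 1.1)
print(f"omega in [{lo:.2f}, {hi:.2f}] rad/s")
```

The same monotonicity argument underlies interval-coefficient methods generally: when the response is monotonic in each interval parameter, the bounds follow from corner evaluations rather than a full uncertainty sweep.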
2011, 40(1): 157-160.
doi: 10.3969/j.issn.1001-0548.2011.01.030
Abstract:
In structural design, reliability-based design optimization (RBDO) has received increasing attention because of its high reliability and safety. This paper proposes two single-loop methods for RBDO, based on the reliability index approach and the performance measure approach, respectively. Both methods directly transform the original RBDO problem into a deterministic optimization problem. Two well-known RBDO problems are used to demonstrate the efficiency of the proposed methods.
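The core move in a single-loop method, replacing the probabilistic constraint with a deterministic one evaluated at an approximate most probable point (MPP), can be sketched as follows. The limit-state function, its gradient, and the distribution parameters are assumed examples, not the paper's benchmark problems.

```python
import numpy as np

def shifted_constraint(g, grad_g, mu, sigma, beta_t):
    """Deterministic surrogate for P[g(X) >= 0] >= Phi(beta_t).

    Approximates the MPP in one shot: step from the mean against the
    (sigma-scaled) gradient direction by beta_t standard deviations,
    then evaluate g there. Requiring the returned value to be >= 0
    replaces the probabilistic constraint in a single deterministic
    optimization loop, avoiding the nested reliability analysis.
    """
    scaled = grad_g(mu) * sigma          # gradient in standard-normal space
    alpha = scaled / np.linalg.norm(scaled)
    x_mpp = mu - beta_t * sigma * alpha  # approximate MPP in original space
    return g(x_mpp)

# Assumed toy limit state: g(x) = x1 + x2 - 5, with X ~ N(mu, diag(sigma^2)).
g = lambda x: x[0] + x[1] - 5.0
grad_g = lambda x: np.array([1.0, 1.0])
mu = np.array([4.0, 4.0])
sigma = np.array([0.3, 0.3])
val = shifted_constraint(g, grad_g, mu, sigma, beta_t=3.0)
print(val)  # > 0 means the constraint holds at target reliability beta_t
```

In a full RBDO run this surrogate would be re-evaluated at each design iterate, which is what makes the method "single-loop": no inner reliability optimization is needed.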
2011, 40(1): 20-25.
doi: 10.3969/j.issn.1001-0548.2011.01.004
Abstract:
Multiuser detection for direct-sequence code-division multiple-access (DS-CDMA) systems is studied, and a blind multiuser detection algorithm, KRPSIC, based on Khatri-Rao product decomposition and successive interference cancellation is proposed. The algorithm fully exploits the Khatri-Rao product structure of the received signal in DS-CDMA systems and achieves multiuser detection when both the users' spreading codes and the channel fading coefficients are unknown; the identifiability and monotonic convergence of the algorithm are proved. Simulation results show that the bit-error-rate performance of the KRPSIC algorithm approaches that of the non-blind zero-forcing algorithm. As a nonlinear iterative algorithm, it is insensitive to the choice of initial values during multiuser detection and remains effective when initialized with random values.