Reviews & Opinions
Independent and trusted. Read before you buy the Boss RDS-3125!

Boss RDS-3125

Manual

Preview of the first few manual pages (at low quality). Check before downloading.

Download (English)
Boss RDS-3125, size: 336 KB
Instruction: After clicking Download and completing the offer, you will get access to a list of direct links to websites where you can download this manual.

About

About Boss RDS-3125
Here you can find everything about the Boss RDS-3125, such as the manual and other information. For example: reviews.

The Boss RDS-3125 manual (user guide) is ready to download for free.

At the bottom of the page, users can write reviews. If you own a Boss RDS-3125, please write about it to help other people.


User reviews and opinions


Comments to date: 3. Page 1 of 1.
kenexcelon 4:55pm on Thursday, September 2nd, 2010 
Should Have Spent the Extra $100-150!! The 745BA is a good bang for your buck in terms of BASIC features.
mikenzo 4:04pm on Monday, June 7th, 2010 
Easy to connect; sucks that you have to buy a separate cable for iPod. It lights up nice, it has multiple colors, has so many buttons.
Superfuchs 6:08pm on Monday, April 19th, 2010 
Horrible Purchase This product is horrible, it does not work properly. Definitely not worth the money. DO NOT BUY

Comments posted on www.ps2netdrivers.net are solely the views and opinions of the people posting them and do not necessarily reflect our views or opinions.


Documents


With the advent of network-centric warfare, the computing resources of the Navy and Marine Corps have become prime targets of attack by adversaries of the United States. In some cases, attacks on a distributed computing system or its underlying network infrastructure require that the attacker have some information about the host network operating system being used on the targeted node in the system. This information, to some extent, can be gained by a network scan. At present, system defenses are designed to block intrusions and limit access to stop potential intruders from exploiting known vulnerabilities of a network operating system. Unfortunately, the most dangerous intruder is one who is exploiting a vulnerability unknown to those attempting to maintain and protect system integrity. This intruder can exploit an unknown vulnerability over and over again until his activity within the system is detected or the vulnerability is revealed, for example, to the general hacker community. When a system administrator becomes aware that system defenses have been compromised or are vulnerable, the administrator must decide either to shut down the service or to attempt to shore up the defenses. When a security hole is patched, the potential intruder may often be able to detect that the patch was made and move on to another method of attack based on a list of actual or potential system vulnerabilities.

Before a potential intruder can exploit a vulnerability of a system, he must first gather information about the system. Every information system has a presence on the network called a fingerprint. A fingerprint is marked by such things as the system's Internet Protocol (IP) address, the ports and services available, and the operating system. The fingerprint is evident in the network in many ways, such as messages sent in services, availability of connection ports, and content of messages. An intruder can scan a system to obtain its fingerprint and identify the operating system (OS) and services running, and then the intruder can try to match the OS and services to vulnerabilities and exploits.

One major weakness in systems today is that nodes in a distributed system can reveal too much information about themselves. Our systems faithfully reply to requests from remote nodes and volunteer reliable information as to what services are available and even what operating system and applications are being used. These accurate details are just what a potential intruder is seeking. System vulnerabilities are based on a weakness in the protocol, application, or operating system, so without accurate knowledge of the application and operating system, an intruder cannot determine whether the system is vulnerable; he can only guess. A firewall may limit access to some ports, protocols, and even some types of scans, but as long as a network connection of any kind is possible, sufficient details can be gathered from the target computer to reveal the OS type (e.g., NT, Solaris, Linux) and even its version (e.g., 98, NT Service Pack 4, kernel version 2.2.16-18).

This research examined ways of obfuscating information that can be gathered through network scans by changing relevant system settings and modifying the OS kernel of an open-source operating system (Linux). The focus was to achieve a dynamic configurability that would allow the scanned host to change the information detectable to match that of a different operating system (Windows NT). We were able to modify the OS fingerprint so it could not be definitively identified over the network. These changes removed or modified various fields and parameters of the Transmission Control Protocol/Internet Protocol (TCP/IP) packets transmitted, in nondisruptive ways.

This research found that the current tools used for OS identification through network scanning are very advanced. The tools are designed to use collaborative efforts to gather as many different fingerprints as possible and allow for fingerprints to be retrieved from configuration files. Specifically, the Network Mapper (NMAP) tool had over 400 different OS fingerprints listed. Each fingerprint included a specific response description for as many as seven different packet scans. These scans attempted to exercise the OS TCP/IP stack by sending unusual, invalid, or nonsense packets to the target machine. Specific attention was given to the variable and optional field settings of the TCP/IP packet. In particular, TCP options such as Time Stamp, Maximum Segment Size, and Window Scaling, and the Window Size field, revealed the most differences.

This research separated the different fingerprints into groups based on the types of differences between them. This method allowed quick identification of possible targets for which operating systems could be changed to look like each other. Differences in fingerprints were categorized as either major or minor. Major differences included responding or not responding to a scan packet entirely, or changes to the sequence-number generation algorithm. Minor differences were defined as changes in the fields and options of the response packet, excluding major differences. Surprisingly, there were very few major differences between the OSs in the NMAP program configuration file.
When considering the more common operating systems, it was found that most systems differed by no more than three major changes and six minor changes. There were 21 systems that had zero major differences and three systems with zero minor differences. The first and most obvious minor difference was the TCP Options field of the TCP packet. Each OS implemented a different set of options, so these could easily reveal the identity of the OS being scanned.

(Linux is a registered trademark of Linus Torvalds. Windows NT is a registered trademark of Microsoft Corporation.)
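As an illustration of the grouping just described, the sketch below compares two fingerprints and buckets their differences as major or minor. The fingerprint layout (a dict of scan name to response fields) and all field names here are hypothetical stand-ins, not the actual NMAP signature format.

```python
# Hypothetical fingerprint layout: scan name -> response-field summary.
def diff_fingerprints(fp_a, fp_b):
    major, minor = [], []
    for scan in sorted(set(fp_a) | set(fp_b)):
        ra, rb = fp_a.get(scan), fp_b.get(scan)
        # Major: one OS answers a probe the other ignores entirely.
        if (ra is None) != (rb is None):
            major.append((scan, "response vs. no response"))
            continue
        if ra is None:
            continue
        # Major: different sequence-number generation class.
        if ra.get("seq_class") != rb.get("seq_class"):
            major.append((scan, "sequence generation"))
        # Minor: differing header fields or TCP options.
        for field in ("tcp_options", "window_size", "df_bit"):
            if ra.get(field) != rb.get(field):
                minor.append((scan, field))
    return major, minor

linux = {"T1": {"seq_class": "random+", "window_size": 0x7D78,
                "tcp_options": "MNNTNW", "df_bit": 1}}
winnt = {"T1": {"seq_class": "time-dependent", "window_size": 0x2017,
                "tcp_options": "MNWNNT", "df_bit": 1}}
print(diff_fingerprints(linux, winnt))
```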

These scenarios were then tested with real software on real hardware in a test environment. Measurements of all possible deployments were collected and compared. The results showed that substantial performance enhancements could be achieved.

The advances of object-oriented technology in the past decade have led to worldwide acceptance of its principles. Today, numerous developers design their systems by modeling the problem domain in terms of communicating entities called objects. Object-oriented systems tend to be more intuitive, be easier to maintain, and allow for more reusable code. The future of computing is heading for a universe of distributed object servers. The evolution of object servers to distributed object servers will parallel the evolution of relational databases. Over time, object servers will provide functionality to more client applications than their original applications, just as relational databases came to be used by more applications than the original application. In both cases, systems optimized for the original application may not perform well for the new applications. Tools that allow a programmer to model an object and create object servers with all the necessary infrastructure code needed to work as a distributed object server will soon be available. Such progress will lead to an explosion in the number of object servers available to client applications.

A user's network of computers will be in a constantly changing state. Object servers, applications, hardware, and user preferences will be in a constant state of flux. No static deployment strategy can adequately take advantage of the assets accessible on the network in such a frequently changing environment. In many cases, there exist predictable points in time at which users know how their network of computers will change. These predictable points in time are usually scheduled. By allowing users to take advantage of these scheduled changes, the system can be better utilized.

No system can accurately predict user interaction with a system. Two separate users performing the same job will interact with a system differently, and the same user may interact differently while performing the same job. For these reasons, and because of the resulting combinatorial-explosion problems, a more dynamic software-engineering approach must be taken instead of a static computer-science approach. The alternative is a deployment strategy dictated by the system engineer's view of how the system will be used. Of course, the system engineer doesn't revisit this strategy every time hardware, software, or user interactions change. The goal is to make better deployment choices without the need for a system engineer, since many of these changes will take place without the knowledge of a system engineer or the budget to employ one.

The Java Remote Method Invocation (RMI) experiments led to some interesting results. The predictions made by the model were very accurate, leading to good choices for server deployment. However, more striking conclusions are drawn from looking at groups of experiments. Although the model does a good job of predicting performance for a single point, the true strength of this approach is chaining these points together. By taking advantage of changes to the system at predictable points in time, we can do better than any single statically assigned server placement.
If we assume that we have a shift schedule that has the following six unique manning requirements over the duration of the schedule, then we can initiate object-server redeployments to coincide with the shift changes. The shaded areas in Table 1 indicate the deployment pattern recommended by the model. The numbers in the matrix are the actual measured values for these deployments.

Table 1. Shift changes.

PAT   SERV A   SERV B   SERV C    ROLE 1    ROLE 2     ROLE 3     R2 (4)     R3 (3)    R1 (28)
1     GIGA     GIGA     BR733     899.34   5530.33    4964.73   13925.95   20415.47   13839.30
2     GIGA     BR733    GIGA      960.81   6417.17    4333.77    7802.17    7413.34   11117.11
3     GIGA     BR733    BR733    1079.64   6686.38    3789.35   11711.42   11335.30    8259.05
4     BR733    GIGA     GIGA     1140.80   5953.02    7005.97   13066.21   14614.58   12569.52
5     SIX      GIGA     BR733    1355.59   6752.50    8266.52    9124.94    8625.44   12488.02
6     SIX      BR733    GIGA     1306.69   7380.83   11746.10   14333.22   10544.22   12042.34
We are only interested in the six deployment patterns listed in Table 1. If we were to institute a static deployment for our system, then we would be forced to pick just one of the deployment patterns listed above, and the system engineer would be forced into some logic that mitigated a worst-case scenario. However, since we have the ability to reason about different manning schedules, we can take advantage of this capability. By allowing the system to adjust the location of its object servers at shift changes, we gain substantial improvements to the system. By comparing the model's recommended deployment pattern against the other deployment patterns in Table 1, we can quantify this improvement: dividing the measured performance of the model-predicted pattern by the measured performance of each other pattern in the same column gives the performance improvement for each shift. Table 2 contains these values.
Table 2. Shift improvements.

PAT   SERV A   SERV B   SERV C   ROLE 1   ROLE 2   ROLE 3   R2 (4)   R3 (3)   R1 (28)
1     GIGA     GIGA     BR733      -7%       0%      10%      10%      10%      24%
2     GIGA     BR733    GIGA        0%      14%       5%      10%       4%      13%
3     GIGA     BR733    BR733      11%      17%      18%      26%      39%       0%
4     BR733    GIGA     GIGA       16%       7%       0%       7%      15%      46%
5     SIX      GIGA     BR733      29%      18%      14%       0%      10%      66%
6     SIX      BR733    GIGA       26%      25%      10%      16%       0%      68%
It is interesting to note that we are only comparing deployment patterns that have a high probability of actually being used. Only one entry in the table has a negative value; all other entries show a substantial performance improvement. Clearly, then, Table 2 illustrates that any organization with known, fluctuating manning schedules would benefit from this approach.
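A minimal sketch of the comparison behind Table 2: for one shift (one column of Table 1), the improvement of a chosen deployment pattern over each alternative is computed as 1 - chosen/other, which assumes the measurements are times (lower is better) and matches the sign conventions in Table 2.

```python
# Measured ROLE 1 performance per deployment pattern, from Table 1.
role1 = {1: 899.34, 2: 960.81, 3: 1079.64, 4: 1140.80, 5: 1355.59, 6: 1306.69}

def improvements(measured, chosen):
    """Improvement of the chosen pattern over each pattern in the column."""
    return {pat: 1.0 - measured[chosen] / perf for pat, perf in measured.items()}

for pat, gain in improvements(role1, chosen=2).items():
    print(f"pattern {pat}: {gain:+.0%}")
# Pattern 1 comes out at about -7%: the model's pick for this shift is slightly
# worse than pattern 1, the single negative entry noted in the text.
```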
Knowledge Mining for Command and Control Systems
Dr. Stuart H. Rubin, Code 27305, (619) 553-3554

Another significant result in FY 01 was a proof that for the low values of signal-to-noise ratio (SNR) used in low-probability-of-detection (LPD) applications, there is negligible difference in performance between a variety of methods that can be used to generate the modulation sequence. Comparisons of maximal length sequences (MLS), modified maximal length sequences (MMLS), and random sequences (RS) reveal that although MLS gives significantly improved performance at high SNR, due to the improved peak-to-sidelobe ratio of the autocorrelation function, the differences are insignificant at low SNR. This result is significant for LPD applications because the set of maximal length sequences is relatively small, and the transmitter could potentially be intercepted by an exhaustive search over the limited number of MLS values. It was further shown that performance could be improved in some cases by using Gray codes to minimize the number of bit errors caused by timing or synchronization errors. However, there will be significant differences in performance at the higher SNRs needed for high-data-rate communications, and the comparative performance of various types of sequences is currently being evaluated.

To quantify the vulnerability of a given waveform, it is necessary to define a performance metric that is standard across the range of waveforms considered. The baseline metric selected is the radiometer, and it was shown that cyclic code shift keying (CCSK) and M-ary Frequency Shift Keying (MFSK) have equivalent vulnerability to a radiometer detector. It was further shown that MFSK is considerably more vulnerable to detection by a channelized receiver, which consists of a bank of bandpass filters with each filter followed by an energy detector.

Another major result in this effort was the development of nonlinear adaptive signal-processing techniques to mitigate hostile interference with minimal distortion of the communications signals. It has been proven that the nonlinear effects can be used to significantly improve performance. This work is being done in conjunction with the University of California, San Diego (UCSD) and Professor Beex, a professor from Virginia Tech and a Senior Research Associate of the National Research Council who is currently working at SSC San Diego.

The use of both space and time diversity is important to both high-data-rate and LPD systems. The publications on the Intersil Web site discuss the importance of antenna diversity in mitigating intersymbol interference (ISI) in high-data-rate modems. For LPD systems, antenna diversity is important both for ISI and for minimizing the energy transmitted to hostile interceptors. This topic is being investigated in conjunction with UCSD and Professor Beex. Methods to adapt the weights of the antenna to maximize the power delivered to the receiver have been developed.

Work was initiated on the modification of CCSK to provide higher data rates and robustness in multipath. These issues are critical in the comparison with other modulation techniques under consideration for commercial standards. Multipath sensitivity is the key issue raised by Intersil relative to incorporation of CCSK into the commercial standards. In addition, comparisons to the commercial code-division multiple access (CDMA) standards for cellular telephones and to orthogonal frequency-division multiplexing (OFDM) were also initiated. One of the properties of CCSK is that it is not efficient in its use of bandwidth, since it requires that the bandwidth be doubled in order to increase the number of bits-per-symbol by one.
This property is desirable for LPD because it spreads the energy over frequency and maintains the same detectability margin. However, this characteristic is undesirable for commercial applications because bandwidth is expensive. CDMA also requires more bandwidth to provide increased data rates. The goal of our CCSK/CDMA
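To make the sequence comparison above concrete, the sketch below generates one period of a maximal length sequence with a linear feedback shift register (LFSR) and contrasts its circular autocorrelation peak-to-sidelobe ratio with that of a random sequence. The register length and tap positions are illustrative choices (a primitive degree-5 polynomial), not the sequences used in the study.

```python
import numpy as np

def lfsr_mls(taps=(5, 2), n=5):
    """One period of an m-sequence from a Fibonacci LFSR. taps are 1-indexed
    register positions; (5, 2) realizes a primitive degree-5 feedback
    polynomial, giving the maximal period 2**5 - 1 = 31."""
    state = [1] * n
    seq = []
    for _ in range(2**n - 1):
        seq.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

def circular_autocorr(bits):
    x = 2.0 * bits - 1.0                  # map {0,1} -> {-1,+1}
    return np.array([np.dot(x, np.roll(x, k)) for k in range(len(x))])

m = circular_autocorr(lfsr_mls())
r = circular_autocorr(np.random.randint(0, 2, 31))
print("MLS    peak/sidelobe:", m[0], "/", np.abs(m[1:]).max())  # 31 / 1
print("random peak/sidelobe:", r[0], "/", np.abs(r[1:]).max())  # typically worse
```

The off-peak circular autocorrelation of an m-sequence is exactly -1, which is the peak-to-sidelobe advantage the text attributes to MLS at high SNR; at low SNR the noise swamps this distinction, consistent with the result above.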

ROUGH-SURFACE SCATTERING AND SEA-SURFACE MOTION
Scattering from rough boundaries produces losses in signal energy. These losses are twofold. First, scatter converts energy to higher angles, eventually allowing the signal energy to penetrate the bottom, where it is absorbed. Second, scatter destroys the coherence of the wave, thereby producing what might be termed an apparent loss. For instance, a moving surface will stretch and compress a sine wave reflected from it; if the reflected energy is detected by a matched filter expecting a perfect sine wave, the matched filter will see a reduced power level. This discussion applies, for instance, to a single tone in an M-ary Frequency-Shift Keying (MFSK) signaling scheme, where a filter bank detects the tone. If we have a rough bottom with a static geometry, this loss of coherence does not occur. However, if the source or receiver moves, we have a dynamic situation similar to the surface loss just described.

In round numbers, a typical communications carrier gives a wavelength of approximately 10 cm. A classical measure of the role of roughness, the Rayleigh roughness parameter, is the ratio of the roughness to the wavelength (or, more precisely, to the vertical component of the wavelength). As this number approaches unity, losses per bounce become large, perhaps 10 dB, and many of the standard scatter models that assume small roughness fail. The point of this discussion is that 10-cm roughness is easily attained on both surface and bottom boundaries in real environments, implying large boundary losses. Furthermore, the roughness is typically not known to within 10 cm, implying large uncertainty in those same losses and in the resulting transmission loss. Finally, the actual scatter mechanisms are complicated. In some cases, the air-water interface is the scatterer; in other cases, the bubbles below are likely to be dominant. Similarly, at the ocean bottom, scatter can occur at the interface or from inhomogeneities just below it (though not too far below, since volume attenuation limits the sediment penetration significantly).
As a first step toward modeling scattering effects, we assumed that the boundary roughness dominates the problem, and we concentrated first on the bottom roughness. A common approach [8] to characterizing this roughness is to use the spatial power spectral density, i.e., the power spectrum of the bottom roughness. Various forms may be used; however, one popular choice is a power law, $\Phi(k) \propto k^{-b}$, where $b$ is a measured parameter for the particular site. Suggested values for $b$ are given in [8], along with the root-mean-square (RMS) roughness that defines the overall amplitude of the spectrum. Given the spatial power spectral density, individual realizations of the bottom can be constructed using a standard technique. In particular, the power spectrum is converted to an amplitude spectrum by taking its square root. The amplitude is then discretely sampled, and a random phase is introduced. Finally, a Fast Fourier Transform (FFT) is performed, and the mean depth is added to obtain a single realization of the bottom. In equations, therefore, a realization takes the form

$$h(x) = \bar{h} + \mathcal{F}^{-1}\left\{ \sqrt{\Phi(k)}\, e^{i\theta(k)} \right\},$$

where $\bar{h}$ is the mean depth and $\theta(k)$ is a random phase drawn uniformly from $[0, 2\pi)$.
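The sketch below implements that recipe for a one-dimensional bottom profile. The grid size, spectral exponent b, and RMS roughness are illustrative values, not the site parameters of [8].

```python
import numpy as np

def bottom_realization(n=1024, dx=1.0, b=3.0, rms=0.1, mean_depth=100.0):
    """One realization of a rough bottom: sqrt of a power-law spectrum,
    random phases, inverse FFT, scale to the RMS roughness, add mean depth."""
    k = np.fft.rfftfreq(n, d=dx) * 2 * np.pi
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-b / 2.0)          # sqrt of Phi(k) ~ k^(-b); zero DC
    phase = np.random.uniform(0, 2 * np.pi, len(k))
    h = np.fft.irfft(amp * np.exp(1j * phase), n)
    h *= rms / h.std()                     # set the overall RMS roughness
    return mean_depth + h

z = bottom_realization()
print(z.mean(), z.std())                   # ~100.0 and ~0.1
```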

All of the above reasoning issues may be treated via a Bayesian second-order probability approach called the Complexity-Reducing Algorithm for Near-Optimal Fusion (CRANOF), which explicitly takes into account the problem of reducing the complexity of computations. In brief, the general CRANOF algorithm addresses problems of reasoning involving uncertain or incompletely specified probabilities. That is, there is a given set of either known or lower-bounded probabilities and a set of designated events of interest whose probabilities are not uniquely determined, but are desired (as in the chaining problem). When the desired probabilities are uniquely determined, CRANOF will find their values; when they are not uniquely determined, CRANOF finds their most central value. In general, CRANOF can be applied to a wider variety of probability problems than exact-knowledge techniques, which, as in the case of Bayes nets (see Pearl, 1988), typically make a large number of independence assumptions in order to guarantee uniqueness. Consequently, CRANOF is able to deal with problems involving not only uncertain information, but also underdetermined information, which frequently occurs in sensor-fusion problems. Yet, as discussed above, there remain the overriding issues of rule reduction, modification, and selection, all of which are critical to achieving computationally tractable sensor fusion. The structure of the CRANOF algorithm is based on the synthesis of three previous major achievements concerning rule selection and reduction (Bamber & Goodman, 2000):

Achievement 1. Under a (somewhat restrictive) consistency assumption, any finite set of inference rules whose associated conditional probabilities are all reasonably high may be reduced to a near-equivalent single rule. This assumption means that the single rule (for reasonably high validity thresholds) asymptotically yields essentially the same CRANOF estimators of conclusion validity as if the entire set of inference rules were used. While this rule is more complex in form than each of the original rules, its total complexity, and that associated with its subsequent use in the conclusion-validity estimation phase, is significantly less than if the original set of rules had been used. In CRANOF, this extreme reduction has actually been shown (as of the end of FY 01) to be modifiable by replacing the original rule set with a relatively small set of rules, not necessarily a single rule (see the new results for FY 01 below).

Achievement 2. A simple substitution procedure can be used, analogous to that employed by the standard maximum-entropy approach, but, unlike the latter (see, e.g., Rödder & Meyer [1996] and Rödder [2000] for an exposition of both an efficient algorithm for recursively computing maximum-entropy estimators and the development of an associated logic), a relatively simple closed-form expression can be obtained. (During FY 01, Achievement 2 was also significantly extended; see the next section on FY 01 results and also Goodman, Bamber, and Nguyen, to be submitted.)

Achievement 3. Via a fundamental (but often overlooked) theorem of regression theory (Rao, 1973), the direct use of training information extracted from a given database provides an alternative to the use of a rule base extracted from the same database.

In future work, CRANOF will also extend Achievements 2 and 3 so that, together with Achievement 1, a viable rule selection and reduction procedure will exist for large classes of rule-based systems, including those pertaining to all levels of sensor fusion.

Also, the actual implementation of CRANOF consists of two aspects: (1) rule reduction, modification, and selection; and (2) application of the near-optimal reasoning system to data association and sensor fusion for tracking. Aspect (1) is a direct extension and integration of the three major achievements mentioned previously. Aspect (2) produces outputs that can be considered optimal or near-optimal estimates of candidate conclusion validities, with inputs being probabilistic or linguistic, and conditional or unconditional (or factual) in nature. This method allows for potentially including a full decision-theoretic (i.e., utility/cost function) structure based on incompletely specified probabilities, as well as linguistic and causal information. More generally, CRANOF is capable of operating as a general decision-support system relevant to sensor fusion, operating on both uncertain and underspecified information. Finally, the reasoning aspect of CRANOF can be directly applied to correlation/association and fusion of geodynamical tracking information with attribute information.
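To illustrate the style of problem CRANOF addresses (though not the CRANOF algorithm itself), the sketch below uses linear programming to bound the probability of a designated event given lower bounds on related probabilities. When the interval collapses to a point, the desired probability is uniquely determined; otherwise it is underdetermined, and a central value within the interval can be reported.

```python
import numpy as np
from scipy.optimize import linprog

# Two events A, B; atoms p = (P(AB), P(A~B), P(~AB), P(~A~B)).
# Given: P(A) >= 0.9 and P(B|A) >= 0.9. Desired: feasible range of P(B).
A_ub = np.array([
    [-1.0, -1.0, 0.0, 0.0],   # -(p1 + p2) <= -0.9        i.e. P(A) >= 0.9
    [-0.1,  0.9, 0.0, 0.0],   # -(0.1*p1 - 0.9*p2) <= 0   i.e. P(B|A) >= 0.9
])
b_ub = np.array([-0.9, 0.0])
A_eq, b_eq = np.ones((1, 4)), np.array([1.0])   # atoms sum to one

c = np.array([1.0, 0.0, 1.0, 0.0])              # objective: P(B) = p1 + p3
lo = linprog(c, A_ub, b_ub, A_eq, b_eq).fun
hi = -linprog(-c, A_ub, b_ub, A_eq, b_eq).fun
print(f"P(B) is underdetermined: [{lo:.2f}, {hi:.2f}]")   # [0.81, 1.00]
```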

FY 01 RESULTS

We documented our FY 01 work in the following papers: Bamber & Goodman (2001c); Bamber, Goodman, & Nguyen (2001); Bamber, Goodman, & Nguyen (submitted for publication); Bamber, Goodman, Torrez, & Nguyen (2001); Goodman (2001); Goodman & Bamber (to be submitted); Goodman, Bamber, & Nguyen (to be submitted); Goodman & Kreinovich (2001); Goodman, Trejo, Kreinovich, Martinez, & Gonzalez (2001); Torrez, Bamber, & Goodman (2001); Torrez, Goodman, Bamber, & Nguyen (2001). In addition, we gave the following unpublished oral presentations: Bamber & Goodman (2001a, 2001b); Goodman & Bamber (2001).

Our FY 01 results can be categorized as solutions, together with applications, to two types of problems: (a) exact threshold problems, where specified values of the probability thresholds, which may lie anywhere in the zero-one interval, are given; and (b) near-unity threshold problems, where all that is known about the relevant rule probabilities is that they are close to one. The new results obtained during FY 01 include the following items:

(1) We obtained an unexpected major result concerning exact threshold problems: a new method was developed for trading off optimality versus complexity by using results from the near-one threshold problem as a guide. A key aspect of this mathematically rigorous result is that the unnecessarily restrictive consistency assumption mentioned in Achievement 1 can be dropped, and the reduction of rules can be to a small set of relatively noncomplex inference rules. Using a Bayesian prior Dirichlet distribution for the second-order probabilities, a complete closed-form expression can be obtained for both the exact and near-one threshold behavior of the entire system.

(2) Another major result was the solution of the near-unity threshold problem under the weakest possible assumptions, namely, when no assumptions are made concerning the relative magnitudes (across rules) of the various rule thresholds' distances from one. Specifically, it was shown that there are various equivalent methods of testing whether a conclusion is inferable from a collection of rules that are employed as premises. One such method involves checking whether a particular directed graph (which represents the premise rules) has a certain property (which represents the conclusion). Another method involves showing, first, that there exists an argument that supports the conclusion and, second, that every counterargument supporting the negation of the conclusion is overridden by some argument that supports the conclusion itself.

Applications of our results were also made to the problems of track fusion and detecting cyber intrusions in computer networks (Bamber, Goodman, Torrez, & Nguyen, 2001; Torrez, Bamber, & Goodman, 2001).

Most work involving compositional data has assumed that the observations $y_i \in \mathbb{R}^n$ are well represented by the linear mixture model

$$y_i = \sum_{k=1}^{d} a_{ki}\,\mu_k + \epsilon,$$

subject to the constraints c.1) $a_{ki} \geq 0$ and c.2) $\sum_{k=1}^{d} a_{ki} = 1$, where $d$ is the number of classes, $\mu_k \in \mathbb{R}^n$ is the signature or endmember of class $k$, $a_{ki}$ is the abundance of class $k$ in observation $y_i$, and $\epsilon \sim N(0, \Sigma_0)$ is an additive noise term with normal probability distribution function (pdf) of mean 0 and covariance $\Sigma_0$. In this model, variability of the observations arises from variability of the abundance values and additive noise. Observations may also exhibit intraclass variability. If each observation arises from one of $d$ normal classes, then the data have a normal mixture pdf

$$p(y) = \sum_{k=1}^{d} \pi_k\, N(\mu_k, \Sigma_k)(y), \qquad \pi_k \geq 0, \quad \sum_{k=1}^{d} \pi_k = 1,$$

where $\pi_k$ is the probability of class $k$. For many applications, data may consist of observations from pixels that are composed of multiple materials such that the observations from a given material have random variation. For such data, neither the linear mixture model nor the normal mixture model is adequate, and better classification and detection results may accrue from using more accurate methods.
Stocker and Schaum [1] propose a stochastic mixture model in which each fundamental class is identified with a normally distributed random variable, and observations y i are modeled as a composition:
$$y_i = \sum_{k=1}^{d} a_{ki}\, s_k \quad \text{such that} \quad s_k \sim N(\mu_k, \Sigma_k), \quad a_{ki} \geq 0, \quad \sum_{k=1}^{d} a_{ki} = 1.$$
To estimate parameters of the model, the allowed abundance values are quantized, e.g., $a_{ki} \in \{0, 0.1, \ldots, 0.9, 1\}$, and each combination of quantized abundance values that satisfies the constraints is associated with a normally distributed class whose mean and covariance are the corresponding linear combinations of the fundamental mean vectors and covariance matrices. The data are then fit to the normal mixture model consisting of these classes. Stocker and Schaum demonstrate improved classification and detection using this model with three fundamental classes [1]. This approach is limited to a small number of fundamental classes, as the number of mixture classes grows very rapidly with the number of fundamental classes. Furthermore, the quantization of the abundance values limits the accuracy of the class-parameter and abundance estimates.
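The growth is easy to exhibit: with step 0.1, the admissible quantized abundance vectors over $d$ fundamental classes are the compositions of 10 into $d$ nonnegative parts, so the number of mixture classes is $\binom{10+d-1}{d-1}$. A short sketch:

```python
from math import comb

# Number of quantized abundance vectors (step 0.1) with a_k >= 0 and
# sum(a_k) = 1, i.e. compositions of 10 into d nonnegative parts.
for d in range(2, 9):
    print(f"{d} fundamental classes -> {comb(10 + d - 1, d - 1):6d} mixture classes")
# Three classes give 66 mixture classes; eight already give 19448.
```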

The abundance estimates are updated (UA) subject to constraints c.1 and c.2.a or c.2.b, using the current class parameters $\mu_k, \Sigma_k$ [2]. The class parameters are updated (UP) using the expectation-maximization equations derived in [2] and the current abundance estimates $a_{ki}$. Likelihood increases with each iteration of UA or UP; thus, a sequence of parameter estimates of increasing likelihood is obtained by applying a sequence of updates UA, UP, UA, UP, .... The iteration is halted when a convergence criterion is satisfied.
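In outline, this estimation loop is ordinary coordinate ascent. The skeleton below uses placeholder update functions, since the actual UA and UP equations are in [2]; only the control flow (monotone likelihood ascent with a convergence test) is concrete.

```python
def fit(y, params, update_abundances, update_params, loglik,
        tol=1e-6, max_iter=500):
    """Alternate UA and UP until the likelihood gain falls below tol.
    update_abundances, update_params, and loglik are supplied callables."""
    prev = -float("inf")
    for _ in range(max_iter):
        abund = update_abundances(y, params)   # UA: abundances, params fixed
        params = update_params(y, abund)       # UP: EM update of class params
        cur = loglik(y, abund, params)         # nondecreasing by construction
        if cur - prev < tol:
            break
        prev = cur
    return abund, params
```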
3. Detection Algorithms

Both anomaly and likelihood-ratio detection algorithms may be derived from the stochastic compositional model (SCM). If the target signature is unknown, then an anomaly detector is obtained by estimating the parameters of the data as described above and computing the log-likelihood of the observation $y_i$ given the parameters:
$$A_S(y_i) = L(y_i \mid H_0) = -\tfrac{1}{2}\log\lvert\Sigma_i\rvert - \tfrac{n}{2}\log(2\pi) - \tfrac{1}{2}(y_i - \mu_i)^{T}\,\Sigma_i^{-1}(y_i - \mu_i) \qquad (5)$$

where H0 (H1) indicates that the target parameters are not (are) included in the abundance estimate. An anomaly detection procedure is obtained by comparing (5) to a threshold. A likelihood ratio test may be derived from the SCM if a target signature is available. The log-likelihood ratio is then
$$L_S(y) = L(y \mid H_1) - L(y \mid H_0),$$
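A minimal sketch of both detectors, assuming the mean and covariance implied by the $H_0$ and $H_1$ fits are already in hand (the values below are illustrative placeholders, not SCM estimation outputs):

```python
import numpy as np

def log_likelihood(y, mu, sigma):
    """Gaussian log-likelihood, the anomaly statistic of equation (5)."""
    n = len(y)
    r = y - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (logdet + n * np.log(2 * np.pi) + r @ np.linalg.solve(sigma, r))

rng = np.random.default_rng(0)
mu0, sigma0 = np.zeros(3), np.eye(3)        # fit without the target (H0)
mu1, sigma1 = np.full(3, 0.5), np.eye(3)    # fit including the target (H1)
y = rng.normal(0.4, 1.0, 3)                 # one observation (pixel)

anomaly_score = log_likelihood(y, mu0, sigma0)   # low => anomalous under H0
llr = log_likelihood(y, mu1, sigma1) - log_likelihood(y, mu0, sigma0)
print(anomaly_score < -6.0, llr > 0.0)      # compare each to a threshold
```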

4. Detection Experiment

Detection algorithms derived from the linear unmixing model, a multivariate Gaussian mixture model, and the stochastic compositional model were applied to the problem of detecting a personal flotation device (PFD) on the ocean surface. Ocean hyperspectral imagery (HSI), taken over case I water from the Central Pacific, and PFD radiance signatures coincident with the imagery data were available from an Office of Naval Research hyperspectral imaging program. The background scene consisted of a 125 x 125 image of 24-band data covering the portion of the spectrum from 415 to 830 nm. Test data for the detection experiment were obtained by combining the PFD radiance signature with the HSI at a pixel fill fraction (PFF) of 5%. The receiver operating characteristic (ROC) curves of the likelihood-ratio and anomaly detection statistics applied to these data are shown in Figures 1a and 1b, respectively. Clearly, algorithms based on the stochastic compositional model have a greatly reduced number of false alarms.
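For reference, an ROC curve like those in Figures 1a and 1b is traced from detector scores on background and target pixels by sweeping a threshold. The sketch below is illustrative, not the experiment's evaluation code.

```python
import numpy as np

def roc(scores_bg, scores_tgt):
    """Empirical ROC: for each threshold, the fraction of background pixels
    above it (false-alarm rate) and of target pixels above it (detection)."""
    thresholds = np.sort(np.concatenate([scores_bg, scores_tgt]))[::-1]
    pfa = np.array([(scores_bg >= t).mean() for t in thresholds])
    pd = np.array([(scores_tgt >= t).mean() for t in thresholds])
    return pfa, pd

rng = np.random.default_rng(1)
pfa, pd = roc(rng.normal(0, 1, 10000), rng.normal(2, 1, 100))
print(pd[np.searchsorted(pfa, 0.01)])   # detection probability at 1% false alarms
```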

canine distemper virus (CDV) (a morbillivirus closely related to one infecting marine mammals) by intramuscular or intradermal inoculation with a plasmid encoding the CDV hemagglutinin protein [3]. To gain some determination of confidence in a transfected gene product, one must rely on measures of immune-system response to exogenously administered plasmids. Much of the effort in the early phases of any vaccine development for marine mammals must be focused on response assays. Once assays are developed and validated, one may confidently evaluate any number of potential vaccines or immunomodulating plasmids for use. This work is currently underway in our laboratory.
The major objective of this research effort is to develop knowledge, methodologies, and reagents required to apply nucleic-acid transfection technology to Navy working marine mammals. The products will lead to the development and application of specific DNA vaccines and immuno-modulating plasmids for the protection of Navy marine mammals. Such results will substantially reduce morbidity and mortality for marine mammals and will enhance force protection.
The approach focused on four major areas: (1) database searches, (2) construction of DNA plasmids, (3) routes of transfection, and (4) assessment of immune response.

1. Database Searches. The first step was identification of the major causes of known and potential morbidity and mortality within the Navy animal population. This research was done by reviewing scientific literature and Navy Marine Mammal Program (NMMP) medical record archives and was facilitated by our recently completed comprehensive marine-mammal database. The infectious agents identified were continuously rank-ordered by incidence, and candidates for plasmid constructs have been identified.

2. Construction of DNA Plasmids. DNA plasmids of choice were constructed following routine cloning techniques in molecular biology. The expertise of Tracy Romano, Ph.D., from Texas A&M University; Peter Hobart, Ph.D., from Vical Inc.; and Branson Ritchie, Ph.D., from the University of Georgia ensured training of the postdoctoral candidate, laboratory technicians, and graduate students in the molecular-biology techniques needed to carry out this task.

3. Routes of Transfection. For most DNA vaccinations, the plasmid is introduced into either skeletal muscle or skin. An effective immune response requires antigen processing and presentation by so-called antigen-presenting cells, which include tissue macrophages, dendritic cells, and Langerhans cells. Immune responses to plasmid vaccines have also been demonstrated with intravenous, intranasal, and oral administration. Our work to date has involved administration of DNA vaccines intramuscularly at two marine-mammal sites: (1) the cervical region of the longissimus muscle and (2) the thoracolumbar region of the longissimus muscle. Ultrasound guidance was used to confirm injection of the vaccine into these muscle bodies.

4. Assessment of Immune Response. Administered vaccines were evaluated for efficacy based on the immune response. Assays were developed and adapted to examine humoral and cellular immune function in marine mammals. Moreover, molecular tools and reagents were developed in our laboratory to help assess immune function and to aid in assessing vaccine efficacy.

PROCEEDINGS

Bamber, D. and I. R. Goodman. 2001. Reasoning with Assertions of High Conditional Probability: Entailment with Universal Near Surety, Proceedings of the Second International Symposium on Imprecise Probabilities and Their Applications (ISIPTA), pp. 17-26.

Bamber, D., I. R. Goodman, W. C. Torrez, and H. T. Nguyen. 2001. Complexity Reducing Algorithm for Near Optimal Fusion (CRANOF) with Application to Tracking and Information Fusion, Proceedings of SPIE AeroSense Conference 4380: Signal Processing, Sensor Fusion, and Target Recognition X, pp. 269-280.

Banister, B. C. and J. R. Zeidler. 2001. A Stochastic Gradient Algorithm for Transmit Antenna Weight Adaptation with Feedback, Proceedings of the 3rd IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC), http://icas.ucsd.edu/icas/faculty/zeidler/students/bBanister/TxAA_SPAWC2001.pdf

Banister, B. C. and J. R. Zeidler. 2001. Implementation of Transmit Antenna Weight Adaptation through Stochastic Gradient Feedback, Proceedings of the 35th Asilomar Conference on Signals, Systems and Computers, http://icas.ucsd.edu/icas/faculty/zeidler/students/bBanister/txaa_asilomar2001.pdf

Banister, B. C. and J. R. Zeidler. 2001. Tracking Performance of a Stochastic Gradient Algorithm for Transmit Antenna Weight Adaptation with Feedback, Proceedings of the IEEE Acoustics, Speech and Signal Processing Conference, http://icas.ucsd.edu/icas/faculty/zeidler/students/bBanister/TxAA_ICASSP2001.pdf

Banister, B. C. and J. R. Zeidler. 2001. Transmission Subspace Tracking for Multiple-Input Multiple-Output (MIMO) Communications Systems, Proceedings of the IEEE Global Communications Conference (GlobeComm), http://icas.ucsd.edu/icas/faculty/zeidler/students/bBanister/TxAA_MIMO_Globecom2001.pdf

Baxley, P. A., H. P. Bucker, V. K. McDonald, and M. B. Porter. 2001. Three-Dimensional Gaussian Beam Tracing for Shallow-Water Applications, Journal of the Acoustical Society of America, vol. 110, no. 5, pt. 2 (November), p. 6.

Burke, J. P. and J. R. Zeidler. 2001. CDMA Reverse Link Spatial Combining Gains: Optimal vs. MRC in a Faded Voice-Data System Having a Single Dominant High Data Rate User, Proceedings of the IEEE Global Communications Conference (GlobeComm), http://icas.ucsd.edu/icas/faculty/zeidler/students/jBurke/OCMRC_x12.pdf

Burke, J. P. and J. R. Zeidler. 2001. Data Throughput in a Multi-Rate CDMA Reverse Link: Comparing Optimal Spatial Combining vs. Maximal Ratio Combining, Proceedings of the IEEE Global Communications Conference (GlobeComm), http://icas.ucsd.edu/icas/faculty/zeidler/students/jBurke/cdmaRLdataThru.pdf

Creber, R. K., J. A. Rice, P. A. Baxley, and C. L. Fletcher. 2001. Performance of Undersea Acoustic Networking Using RTS/CTS Handshaking and ARQ Retransmission, Proceedings of the IEEE/Marine Technology Society (MTS) Oceans 2001 Conference, pp. 2083-2086.

Stanislaw Szpak
Pamela A. Boss

Power Conversion Unit
This invention describes a power-conversion unit consisting of a working electrode and a counter electrode. Palladium and deuterium are co-deposited on the working electrode. During co-deposition, nuclear events of unknown origin occur, resulting in enormous heat release. This heat can be used to provide power for a number of applications. Navy case 82,379; authorized for preparation of patent application, 28 June 2000.
INVENTION DISCLOSURES SUBMITTED
Stephen D. Russell
Philip R. Kesten

Interactive Display Device
This invention is a monolithically integrated display and sensor array that provides for interactive real-time changes to the display image. Navy case 78,287; disclosure submitted 24 October 1996.
Stephen D. Russell
Randy L. Shimabukuro
Yu Wang

Solid-State Light Valve and Tunable Filter
This invention describes an all solid-state light valve, optical modulating device or optical filter that uses color-selective absorption at a metal-dielectric interface by surface plasmons. The invention has applications for displays in command and control, for multispectral imaging in surveillance and reconnaissance, and for filtering in optical communications and scientific instrumentation. Navy case 79,542; disclosure submitted 3 November 1997.

Stephen D. Russell

Spatially Conformable Tunable Filter
The invention provides a flexible or pliable optical modulating device, light valve, or optical filter. It uses a sheet of polymer-dispersed liquid crystal (PDLC) material and specifically selected thin-metal electrodes on either side of the PDLC to form a capacitor structure. When a voltage is applied to the capacitor, the refractive index of the liquid crystal changes, since it is an electro-optic material. The optical properties of one of the thin-metal electrodes are selected in combination with the PDLC to have a surface-plasmon resonance that is either narrowband or broadband, depending on the application. The surface plasmon is then used to selectively absorb incident light at the metal-PDLC interface, while the remaining light is reflected (or transmitted). By varying the applied voltage, and with it the PDLC refractive index, we can modulate the light valve or tune the filter. The improvement over the prior art is that this device can be configured conformably over a surface to improve the acceptance angle of the filter and to simplify fabrication as compared to conventional liquid crystals. Navy case 79,545; disclosure submitted 1 June 1998.

The invention is a process for removing or changing fields in the headers of Transmission Control Protocol/Internet Protocol (TCP/IP) packets of a computer network system, either at a firewall or router, thereby eliminating aspects of the packets that can be used to determine the operating system of the originating computer. SSC San Diego case 444; disclosure submitted 13 December 2001.
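A sketch of the header-scrubbing idea using Scapy (an illustration under assumed replacement values; the disclosure does not specify an implementation): rewrite the fingerprint-bearing fields of an outbound packet so they no longer match the originating OS.

```python
from scapy.all import IP, TCP

def scrub(pkt):
    """Rewrite OS-revealing TCP/IP header fields at a gateway."""
    if IP in pkt:
        pkt[IP].ttl = 64            # normalize the initial TTL
        pkt[IP].id = 0              # drop the OS-specific IP ID pattern
        del pkt[IP].chksum          # force checksum recomputation
    if TCP in pkt:
        pkt[TCP].window = 8192      # replace the OS-default window size
        pkt[TCP].options = []       # strip telltale TCP options
        del pkt[TCP].chksum
    return pkt

print(scrub(IP(dst="192.0.2.1", ttl=128) / TCP(window=65535)).summary())
```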

PROJECT TABLES

SSC San Diego FY 01 ILIR Database
Project Title: CRANOF: A Complexity-Reducing Algorithm for Near-Optimal Fusion with Direct Applications to Integration of Attribute and Kinematic Information
Principal Investigator: Dr. D. E. Bamber
Keywords: sensor fusion; complexity reduction; rule-based systems; uncertainty; probability logic; linguistic information; second-order probability; nonmonotonic reasoning

Project Title: Acoustic Modeling in the Littoral Regime
Principal Investigator: P. A. Baxley
Keywords: telesonar; underwater acoustic communications; channel model; beam tracing

Project Title: Detection of Ionic Nutrients in Aqueous Environments Using Surface-Enhanced Raman Spectroscopy (SERS)
Principal Investigator: Dr. P. A. Boss
Keywords: sensors; surface-enhanced Raman spectroscopy

Project Title: Automatic Matched-Field Tracking
Principal Investigator: Dr. H. P. Bucker
Keywords: acoustics; acoustic detection and detectors

Principal Investigator: Dr. P. Calabrese
Keywords: entropy; uncertainty; conditionals; events; propositions; logic; probability; complexity

Project Title: Modeling of Acoustic Radiation in the Time-Domain with Applications to Non-Linear Structure-Acoustic Interaction Problems
Principal Investigator: Dr. S. Hobbs
Keywords: acoustics; radiation; time-domain Kirchhoff integral equation

Project Title: High-Linearity Broadband Fiber-Optic Link Using Electroabsorption Modulators with a Novel Dual-Wavelength Second-Harmonic Cancellation Scheme
Principal Investigator: J. H. Hodiak
Keywords: fiber optics; photonic link; electro-absorption (EA) modulator

Project Title: Chaos Control and Nonlinear Dynamics in Antenna Arrays
Principal Investigator: Dr. V. In

Project Title: Neaconing: Network Meaconing for Improved Security
Principal Investigator: A. C. Judd
