\documentclass{TEMA}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
%\usepackage[dvips]{graphics}
\usepackage{subfigure}
%\usepackage{float}


\usepackage{epsfig}
%\usepackage{lmodern}
\usepackage[hidelinks]{hyperref}
\author{}
%\author
%    {M. T. SILVA%
%     \thanks{muriloteixeira.90@ifba.edu.br}\,,
%     Departamento de Tecnologia Eletro-Eletrônica,
%     IFBA - Instituto Federal de Educação, Ciência e Tecnologia da Bahia, CEP 40301-015 Salvador, BA, Brasil
%     \\ \\
%     L.S. BATISTA%
%     \thanks{lurimar@ifba.edu.br}\,,
%     Departamento de Matemática e Estatística,
%     IFBA - Instituto Federal de Educação, Ciência e Tecnologia da Bahia, CEP %40301-015 Salvador, BA, Brasil.
%     \\ \\
%     F.M.V. DE ALBUQUERQUE%
%     \thanks{frederico@2dn.mar.mil.br}\,,
%     Grupo de Avaliação e Adestramento em Guerra de Minas,
%     Marinha do Brasil, \(2^\circ\) Distrito Naval - Avenida das Naus, s/n, CEP 40015-270, Salvador, BA, Brasil.}
\title{Feature Extraction of Structures in Sea Water Using Self-Organizing Maps and Electromagnetic Waves \thanks{This work was made possible due to the support of the Grupo de Avaliação e Adestramento em Guerra de Minas (GAAGueM) of Marinha do Brasil, and the financial support of Fundação de Amparo à Pesquisa do Estado da Bahia (FAPESB) and CNPq in form of a Research Scholarship (PIBIC)}}
\begin{document}
\criartitulo

\runningheads {}{Feature Extraction of Structures in Sea Water Using SOM and EM Waves}
% \runningheads {SILVA, BATISTA, and DE ALBUQUERQUE}{Feature Extraction of Structures in Sea Water Using SOM and EM Waves}

\begin{abstract}
{\bf Abstract}. The use of the Self-Organizing Map (SOM) algorithm for feature extraction and dimensionality reduction, applied to underwater object detection with low-frequency electromagnetic waves, is presented. Computer simulation is used to generate a direct model of the study region, and a Self-Organizing Map is then used to fit the data and return a similar model with smaller dimensionality and the same characteristics. Results show that the SOM algorithm creates virtual sensors with consistent predictions, filling the resolution gap of the input data. These results are useful for speeding up decision-making algorithms by reducing the number of inputs to a small group of significant data.

{\bf Keywords}. Self-organizing maps, electromagnetic imaging,
unsupervised neural networks.
\end{abstract}


\newsec{Introduction}
	Electromagnetic (EM) waves are employed nowadays in various applications, from Global Positioning Systems and petroleum exploration \cite{batista2003scattering} to motion trackers \cite{hansen1998magnetic}. In many areas, notably geophysics and medicine, EM methods are applied to obtain information about the propagation medium, whether to locate a new petroleum reservoir or to identify cancer in its early stages \cite{poplack2004electromagnetic}.
	
	Sea water is a propagation medium commonly avoided for electromagnetic waves, due to its high conductivity; in such conditions, acoustic and optical transmissions are preferred \cite{shaw2006experimental}. However, at low frequencies, EM waves can propagate fairly far in conductive media, including sea water, allowing this technique to be used to identify moving and static structures under water, as in mine sweeping as a naval countermeasure \cite{golda1992applications}.
	
	In applications such as automatic underwater mine countermeasures \cite{tan2004evaluation}, the collected data is analyzed by a specialist or input to a decision support algorithm. The amount of data processed by an algorithm can directly affect its performance, so a small set of significant data is desirable as input to such systems. A small data set can be obtained by feature extraction techniques, such as the Self-Organizing Map (SOM).

	This work presents the application of Self-Organizing Maps to a direct electromagnetic simulation of dynamic structures with different {re\-sis\-tiv\-i\-ties} under sea water, reducing the amount of analyzed data to a smaller data group that can represent the whole system for a decision-making/support algorithm.
	
\newsec{Theoretical Background}

\subsection{TM Mode Electromagnetic Wave}

When a travelling electromagnetic wave changes its transmission medium, phenomena such as scattering, diffraction, reflection and refraction are observed. Mathematically, the electric and magnetic fields in linear, isotropic and time-invariant media can be split into two linearly independent components: the primary and secondary fields. This split is described in \autoref{camptote} and in \autoref{camptotm}. The primary field is the one from the original transmission medium, and the secondary field is the mathematical representation of the stated phenomena.

\begin{equation}
	\label{camptote}
	\mathbf{E_t}=\mathbf{E_p}+\mathbf{E_s}
\end{equation}

\begin{equation}
	\label{camptotm}
	\mathbf{H_t}=\mathbf{H_p}+\mathbf{H_s}
\end{equation}

For a sufficiently distant source, the electromagnetic waves can be considered plane waves at the observation point. Based on this, the primary field differential equation turns into an ODE with an analytical solution \cite{batista1991}. From Maxwell's equations in the frequency domain for the primary and secondary fields, the differential equation for the secondary field can be deduced \cite{batista1991}. 

In the Transverse Magnetic (TM) mode of propagation, the magnetic field component \(H_y\) is transverse to the direction of propagation \cite{batista2001}. The other components of the wave, the electric fields \(E_x\) and \(E_z\), are dependent on \(H_y\). This relationship is described by Equations \ref{TMEx}, \ref{TMEy} and \ref{TMHy}, where \(\mathcal{Z}=j\omega\mu \) and \( \mathcal{Y}=\sigma + j\omega\varepsilon \). The parameter \(\varepsilon\) is the electrical permittivity, \(\mu\) is the magnetic permeability, \(\sigma\) is the electrical conductivity of the medium (\(S/m\)) and \( \omega = 2\pi f \) represents the angular frequency (rad/s). For a linear and isotropic medium, \(\varepsilon=\varepsilon_o\) and \(\mu=\mu_o\). 

\begin{equation}
	\label{TMEx}
	E_x = - \frac{1}{\mathcal{Y}} \frac{\partial H_y}{\partial z}
\end{equation}

\begin{equation}
	\label{TMEy}
	E_z = \frac{1}{\mathcal{Y}} \frac{\partial H_y}{\partial x}
\end{equation}

\begin{equation}
	\label{TMHy}
	\frac{\partial}{\partial x} \left(\frac{1}{\mathcal{Y}} \frac{\partial H_y}{\partial x} \right) + \frac{\partial}{\partial z} \left(\frac{1}{\mathcal{Y}} \frac{\partial H_y}{\partial z}\right) = \mathcal{Z} H_y
\end{equation}

From Maxwell's Equations, the secondary magnetic field can be derived \cite{batista1991}, resulting in the differential equation shown in \autoref{TMeq}, where \(\gamma = \sqrt{\mathcal{Z}\mathcal{Y}}\) is the medium propagation constant \cite{jordan1968electromagnetic}.

\begin{equation}
\label{TMeq}
\nabla^2 \mathbf{H_s} + \mathcal{Y}\nabla \times \mathbf{H_s} \times \nabla \left(\frac{1}{\mathcal{Y}} \right) + \gamma^2 \mathbf{H_s} =\mathcal{Z} \Delta \mathcal{Y}\mathbf{H_p} - \mathcal{Y} \nabla \left(\frac{\Delta \mathcal{Y}}{\mathcal{Y}} \right) \times \mathbf{E_p}
\end{equation}

Based on the components of \(\mathbf{H_t}\) and \(\mathbf{E_t}\), the expressions of apparent resistivity and phase for the TM Mode can be derived, as shown in \autoref{appresTM} and in \autoref{phaseTM} respectively \cite{batista1991}.

\begin{equation}
	\label{appresTM}
	\rho = \frac{1}{\omega \mu} \left|\left(\frac{E_x}{H_y}\right)\right|^2
\end{equation}

\begin{equation}
	\label{phaseTM}
	\phi = \arctan \left(\frac{\mathrm{Im}\left(\frac{E_x}{H_y}\right)}{\mathrm{Re}\left(\frac{E_x}{H_y}\right)}\right)
\end{equation}
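As an illustration, \autoref{appresTM} and \autoref{phaseTM} can be evaluated numerically from the complex field components. The following Python sketch assumes \(\mu=\mu_o\); the function names are illustrative and are not part of the simulation code used in this work:

```python
import numpy as np

MU_0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

def apparent_resistivity(E_x: complex, H_y: complex, f: float) -> float:
    """Apparent resistivity (Ohm.m) from the TM-mode ratio E_x/H_y."""
    omega = 2 * np.pi * f
    return abs(E_x / H_y) ** 2 / (omega * MU_0)

def phase(E_x: complex, H_y: complex) -> float:
    """Phase angle (degrees) of the ratio E_x/H_y."""
    Z = E_x / H_y
    return float(np.degrees(np.arctan2(Z.imag, Z.real)))
```

When the ratio has equal real and imaginary parts, the phase is \(45^\circ\), the homogeneous-medium reference used later in the Results section.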

\subsection{EM Wave Depth of Penetration}

	Electromagnetic waves, while travelling through different media, show different behaviour due to the physical and chemical characteristics of each medium. Media can generally be classified as \emph{conductive} or \emph{dielectric}, but these characteristics can be more or less evident at different frequencies of propagation \cite{jordan1968electromagnetic}.
	
	Writing \(\mathcal{Y} = \sigma + j \omega \varepsilon\) and considering a plane wave, the frequency-domain Maxwell equation for the curl of the magnetic field can be rewritten as \autoref{wrotmagwd}.

\begin{equation}
	\label{wrotmagwd}
	\nabla \times \mathbf{H} = \sigma\mathbf{E} + j\omega\varepsilon\mathbf{E}
\end{equation}

From \autoref{wrotmagwd}, the real part is the conduction current density, while the imaginary part is the displacement current density \cite{jordan1968electromagnetic}. The ratio of these two current densities is called the \emph{dielectric dissipation factor} and is shown in \autoref{dissfac}.
 
\begin{equation}
	\label{dissfac}
	D=\frac{\sigma}{\omega\varepsilon}
\end{equation}

The dissipation factor \(D\) can be interpreted as the ratio of conduction to displacement current generated in a medium by a varying magnetic field. From this definition, a reference can be set to identify whether a medium shows conductive or dielectric behaviour: materials with \(D \gg 1\) are considered good conductors, while media presenting \(D \ll 1\) are considered good dielectrics.
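As a numerical check (using the sea-water resistivity \(\rho_0 = 0.196\ \Omega.m\) and the highest frequency, \(10^4\ Hz\), from the Methodology section; the function name is ours), the dissipation factor confirms that sea water behaves as a good conductor over the whole band considered:

```python
import math

EPS_0 = 8.854e-12  # vacuum permittivity (F/m)

def dissipation_factor(sigma: float, f: float, eps: float = EPS_0) -> float:
    """Dielectric dissipation factor D = sigma / (omega * eps)."""
    return sigma / (2 * math.pi * f * eps)

sigma_sea = 1 / 0.196  # sea-water conductivity (S/m), from rho_0 = 0.196 Ohm.m
# Even at the highest frequency used (10 kHz), D >> 1: a good conductor.
print(dissipation_factor(sigma_sea, 1e4))  # ~9.2e6
```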

In \autoref{TMeq}, the propagation constant \(\gamma\) was introduced. It is a complex quantity written as \(\gamma = \alpha + j \beta\), where \(\alpha\) is the \emph{attenuation factor}. Due to its frequency-dependent nature, \(\gamma\) behaves differently for different wavelengths. Thus, for good conductors, the attenuation factor \(\alpha\) is described by \autoref{alphagood}, while for good dielectrics \autoref{alphabad} better describes it \cite{jordan1968electromagnetic}.

\begin{equation}
	\label{alphagood}
	\alpha_c = \sqrt{\frac{\omega\sigma\mu}{2}}
\end{equation}

\begin{equation}
	\label{alphabad}
		\alpha_d = \frac{\sigma}{2}\sqrt{\frac{\mu}{\varepsilon}}
\end{equation}


From \autoref{alphagood} and \autoref{alphabad}, it can be seen that the conductivity of the medium is directly related to its attenuation factor. Therefore, conductive media attenuate incident electromagnetic waves, while dielectric media conduct them better. The terms conductive and dielectric refer to current conduction, not to electromagnetic wave properties.

Based on the attenuation factor, it is possible to define the distance from the surface of a medium at which the electromagnetic wave is attenuated to a negligible amplitude in comparison to the amplitude at the source. This parameter is known as the \emph{depth of penetration} of the electromagnetic wave, represented by \(\delta\) and described by \autoref{deltagen} \cite{jordan1968electromagnetic}.

\begin{equation}
	\label{deltagen}
	\delta=\frac{1}{\alpha}
\end{equation}

For good conductors, the depth of penetration is described by \autoref{deltagood}. From \autoref{deltagood}, one can observe that the depth of penetration is inversely proportional to the square root of the wave frequency and of the conductivity and magnetic permeability of the medium.

\begin{equation}
	\label{deltagood}
	\delta=\sqrt{\frac{2}{\omega\sigma\mu}}
\end{equation}
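A short numerical sketch (the function name is ours; the sea-water resistivity is taken from the Methodology section) reproduces the depths of penetration quoted later for the \(1\ Hz\) to \(10\ kHz\) band:

```python
import math

MU_0 = 4e-7 * math.pi  # magnetic permeability of free space (H/m)

def skin_depth(sigma: float, f: float, mu: float = MU_0) -> float:
    """Depth of penetration for a good conductor: delta = sqrt(2 / (omega*sigma*mu))."""
    return math.sqrt(2 / (2 * math.pi * f * sigma * mu))

sigma_sea = 1 / 0.196  # sea-water conductivity (S/m)
print(skin_depth(sigma_sea, 1.0))  # ~222.8 m at 1 Hz
print(skin_depth(sigma_sea, 1e4))  # ~2.23 m at 10 kHz
```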


\subsection{Neural Networks and Self-Organizing Maps}
	
	The human brain has a unique way of computing information, completely different from digital computation: it can learn from experience through time, generalize from previous knowledge and identify patterns, associating them with previous ones or creating a completely different kind of pattern. From a computational point of view, the brain is highly complex, nonlinear and presents one of the highest degrees of parallelism \cite{haykin1999neural}.

	Artificial Neural Networks (ANNs), as described in \cite{rojas1996neural}, ``are an attempt at modelling the information processing capabilities of nervous systems''. ANNs are computational models that, from a mathematical approach, try to mimic the functioning of the brain in order to achieve some characteristics of central nervous systems that are useful in information processing. Nowadays, ANN algorithms are widely used in pattern recognition, nonlinear system identification, function approximation and control systems \cite{haykin1999neural}.
	
	The originative work for the development of ANNs was the paper by McCulloch and Pitts describing the functioning of an \emph{artificial neuron}. An artificial neuron is the fundamental element of every neural network, uniting elements of neurophysiology and mathematics. The McCulloch and Pitts neuron is divided into three basic elements \cite{haykin1999neural}:
	
	\begin{enumerate}
		\item Set of \emph{synapses} or \emph{connecting links}: this is where the input information arrives at the artificial neuron. These connecting links are weighted, and the weights are modified to adapt the neuron to the input-output relationship.
		\item \emph{Linear combiner}: this element combines, through summation, the weighted entries. An additional weighted entry called the \emph{bias} is added to the linear combiner to adjust the stability of the neuron.
		\item \emph{Activation function}: a function that takes as argument the output of the linear combiner, adjusting the output to a limited range of values.
	\end{enumerate}
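	The three elements above can be sketched in a few lines of Python. This is a generic illustration, with hypothetical weights and a step activation reproducing the original McCulloch and Pitts unit, not code from this work:

```python
def neuron(inputs, weights, bias, activation):
    """Artificial neuron: weighted inputs summed with a bias, then an activation."""
    v = sum(w * x for w, x in zip(weights, inputs)) + bias  # linear combiner
    return activation(v)

step = lambda v: 1 if v >= 0 else 0  # McCulloch-Pitts threshold activation

print(neuron([1, 0], [0.5, 0.5], -0.4, step))  # 0.5 - 0.4 >= 0, the neuron fires: 1
print(neuron([0, 0], [0.5, 0.5], -0.4, step))  # -0.4 < 0, the neuron stays silent: 0
```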
	
	Many network topologies can be obtained by combining these neurons in different structures. These topologies are suited to different applications, and a single topology can be modified for different purposes. 
	
	Every neural network can be classified by its learning paradigm. There are two main groups of learning paradigms: \emph{supervised learning} and \emph{unsupervised learning}. The supervised learning paradigm is most commonly used in function fitting, since for this kind of network there is a ``right answer'' for a given input, arranged as \emph{input-output pairs}. On the other hand, unsupervised learning is more commonly used in pattern recognition or dimensionality reduction, since it relies only on the input data to map a solution, usually a map of the input signal. Hybrid approaches can also be employed, using both paradigms at different stages of training \cite{haykin1999neural}.
	
	One of the most widely known unsupervised learning ANNs is the \emph{Self-{Or\-gan\-iz\-ing} Map} (SOM), also known as the \emph{Kohonen Network}, in honour of Teuvo Kohonen, author of the seminal paper on this kind of ANN. The main objective of a SOM is to map an input of arbitrary dimension onto a one- or two-dimensional grid of neurons, used as a discrete map. All neurons of the grid are fully connected to the input layer, mapping every characteristic presented to the network.
	
	Kohonen networks are based on \emph{competitive learning}: once an input is given to the network, its neurons compete to be activated by it, becoming more specialized in that kind of input and activating when similar signals are presented to the network. Their weights adapt to move closer to the activating input. At first, the neurons are initialized with random weight values, or with existing values from the input. Once the network is initialized, three basic steps are followed in the formation of the self-organizing map \cite{haykin1999neural}:
	
	\begin{enumerate}
		\item Competition: in this step, for each input presented to the network, each neuron of the grid computes the value of a discriminant function, which ranks the neurons for the competition. The winner is the neuron with the best discriminant value.
		\item Cooperation: the winning neuron, at a fixed location of the neuron grid, determines the spatial location of a topological neighbourhood of excited neurons that will share in the result of the competition.
		\item Synaptic adaptation: the excited neurons in the topological neighbourhood have their weights adjusted toward the input, increasing their chances of being activated by a similar input. The winning neuron receives the greatest modification, and the adaptation becomes less significant for farther neurons: the farther a neuron is from the winning unit, the less ``benefit'' it receives from the win.
	\end{enumerate}
	
	The most common discriminant function is the \emph{Euclidean distance} between the input value and the weights of the neurons. The closer a neuron is to the input, in other words, the smaller its Euclidean distance from the input, the more likely it is to be activated by that input. So, the winning neuron will be the one with the smallest distance from the input signal. This relationship is shown in \autoref{euclidis}, where \(i(\mathbf{x})\) is the position of the winning neuron of the grid with respect to the input \(\mathbf{x}\) and \(\mathbf{w}_j\) is the weight vector of neuron \(j\).
	
	\begin{equation}
	\label{euclidis}
		i(\mathbf{x}) = \arg \min_j \| \mathbf{x} - \mathbf{w}_j \|
	\end{equation}
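	The competition step of \autoref{euclidis} amounts to a nearest-neighbour search over the weight vectors; a minimal sketch (the array and function names are ours) follows:

```python
import numpy as np

def winning_neuron(x: np.ndarray, W: np.ndarray) -> int:
    """Index i(x) = arg min_j ||x - w_j|| over the rows of the weight matrix W."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))

W = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.0]])  # three neurons, 2-D weights
print(winning_neuron(np.array([0.9, 0.8]), W))  # closest weight vector is row 1
```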
	
	After the winning unit is identified by the discriminant function, the topological neighbourhood must be determined. Usually, a Gaussian neighbourhood is chosen, due to its properties \cite{haykin1999neural}. The topological neighbourhood function \(h_{j,i(\mathbf{x})}\) is presented in \autoref{toponeighbour}.
	
	\begin{equation}
		\label{toponeighbour}
		h_{j,i(\mathbf{x})}(n) = \exp\left(-\frac{d_{j,i}^2}{2\sigma^2(n)}\right)
	\end{equation}
	
	The topological neighbourhood depends on two factors: the lateral distance \(d_{j,i}\) and the function width \(\sigma\), which stands for the standard deviation of the Gaussian (whose variance is \(\sigma^2\)). The lateral distance is the distance between the winning neuron and its neighbours, defined by \autoref{latdis}, where \(\mathbf{r}_i\) is the position of the winning neuron and \(\mathbf{r}_j\) is the position of the neighbouring unit analyzed by \autoref{toponeighbour}. The function width \(\sigma\) quantifies the specialization of the network, reducing the topological neighbourhood with time, or with iteration \(n\) in the discrete time domain. It is defined by \autoref{stdh}, where \(\tau_1\) is the specialization time constant and \(\sigma_o\) is the initial function width, commonly the radius of the output neuron grid.
	
	\begin{equation}
		\label{latdis}
		d_{j,i} = \| \mathbf{r}_j - \mathbf{r}_i \|
	\end{equation}
	
	\begin{equation}
		\label{stdh}
		\sigma (n) = \sigma_o \exp\left(-\frac{n}{\tau_1}\right)
	\end{equation}
	
	The adaptive stage updates the weights of the activated neurons inside the topological neighbourhood. The fundamental rule for the adaptive stage is presented in \autoref{somupd8}, where \(\eta(n)\) is the learning rate of the SOM. This learning rate also drops exponentially, as shown in \autoref{learnrate}, where \(\eta_o\) is the initial learning rate and \(\tau_2\) is the time constant of the learning cycle. This cycle repeats until the changes in the characteristic map are negligible. The ideal values for these constants are presented in \cite{haykin1999neural}.
		
	\begin{equation}
		\label{somupd8}
		\mathbf{w}_j (n+1) = \mathbf{w}_j (n) + \eta(n) h_{j,i(\mathbf{x})}(n) (\mathbf{x}-\mathbf{w}_j (n))
	\end{equation}
	
	\begin{equation}
		\label{learnrate}
		\eta(n) = \eta_o \exp\left(-\frac{n}{\tau_2}\right)
	\end{equation}
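	Putting Equations \ref{euclidis}, \ref{latdis}, \ref{stdh}, \ref{toponeighbour}, \ref{somupd8} and \ref{learnrate} together, one training run can be sketched as below. This is a minimal online (one sample per iteration) version, assuming the grid radius \(\sigma_o\) to be half the larger grid dimension; it is an illustration, not the exact implementation used in this work:

```python
import numpy as np

def train_som(X, grid_rows=10, grid_cols=10, eta0=0.1, n_iter=None, seed=None):
    """Fit a 2-D SOM grid to the rows of X using the update rules above."""
    rng = np.random.default_rng(seed)
    n_neurons = grid_rows * grid_cols
    if n_iter is None:
        n_iter = 1000 + 500 * n_neurons                    # iteration count formula
    # grid positions r_j of the neurons; weights initialized from input samples
    r = np.array([(i, j) for i in range(grid_rows) for j in range(grid_cols)], float)
    W = X[rng.integers(0, len(X), n_neurons)].astype(float)
    sigma0 = max(grid_rows, grid_cols) / 2.0               # initial width (grid radius)
    tau1 = 1000.0 / np.log(sigma0)                         # specialization time constant
    tau2 = 1000.0                                          # learning-cycle time constant
    for n in range(n_iter):
        x = X[rng.integers(len(X))]                        # present one input sample
        i_win = np.argmin(np.linalg.norm(W - x, axis=1))   # competition
        d2 = np.sum((r - r[i_win]) ** 2, axis=1)           # squared lateral distances
        sigma = sigma0 * np.exp(-n / tau1)                 # shrinking neighbourhood width
        h = np.exp(-d2 / (2 * sigma**2))                   # Gaussian neighbourhood
        eta = eta0 * np.exp(-n / tau2)                     # decaying learning rate
        W += eta * h[:, None] * (x - W)                    # synaptic adaptation
    return W
```

	Each output row of \(W\) is one neuron of the characteristic map; plotting these rows against the input data reproduces the kind of comparison presented in the Results section.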
	

\newsec{Methodology}

	Based on the presented theoretical background, an approach to the problem was chosen. The main objective of this work is to identify conductive and resistive dynamic structures inside a conductive medium, particularly sea water, using Kohonen Networks.
	
	
	The direct model for the dynamic structures under sea water was obtained from computer simulation using the Finite Elements Method \cite{batista1991}, based on the model described by \autoref{drawing}.

\begin{figure}[ht]
\centering %
\scalebox{1.0}{\includegraphics{drawing.eps}}
\caption{Geometrical representation of the region of study, with two moving objects (\(\rho_1 = 0.066\   \Omega . m \), \(\rho_2 = 10 \  \Omega . m \)) immersed in sea water (\(\rho_0 = 0.196\  \Omega . m \)), with 11 EM receptors.}
\label{drawing}
\end{figure}
	
	The region of study ranged from \(-50\  m\) to \(50\  m\) along the sea line, with \(11\) EM receptors equally spaced at the surface (\(y=0\)). A sufficiently distant source generated EM plane waves, within a frequency range from \(10^{0}\  Hz\) to \(10^4\  Hz\), which in sea water corresponds to a depth of penetration varying from \(222.817\  m\) to \(2.228\  m\). Snapshots were taken at intervals of \(20\  s\), from \(0\  s\) to \(160\  s\).
	%\(20\  s\) to \(180\  s\)
	
	 Two objects are present, one more (\(\rho_1 = 0.066\   \Omega . m \)) and one less (\(\rho_2 = 10 \  \Omega . m \)) conductive than the sea water. The first has a width of \(5\   m\) along the water line and a height of \(8\  m\) into the sea; the second object is \(10\  m\) long and \(8\  m\) tall. Both objects are immersed in sea water (\(\rho_0 = 0.196\  \Omega . m \)) and moving along the \(y\)-axis, with the conductive object sinking and the resistive one rising at a speed of \(1\  m/s\), starting at depths of \(20\ m\) and \(180\ m\) respectively. The conductive and resistive objects are centred at \(0\ m\) and \(-25\  m\) along the sea line (\(x\)-axis), respectively. The choice of a conductive and a resistive object is due to the resistivity contrast between these objects and the surrounding medium. In such conditions, the rising object may eclipse the sinking one, making it harder to detect.
	 
	 The accuracy of the simulation method, compared with analytical solutions, is presented in \cite{batista2001}. 1111 points were generated per time interval, making 9999 points in total. A Kohonen Network algorithm was written using the parameters indicated in \cite{haykin1999neural} and presented in \autoref{tableconstants}.
	
	\begin{table}[h]
	\caption{Ideal Values for SOM Parameters.} \label{tableconstants}
	\begin{center}
	\begin{tabular}{|c|c|c|}
	\hline Parameter & Symbol & Value\\
	\hline
	\hline Initial Learning Rate & \( \eta_o \) & \(0.1\)\\
	\hline Specialization Time Constant & \( \tau_1 \) & \(\frac{1000}{\log \sigma_o}\)\\
	\hline Learning Cycle Time Constant & \( \tau_2 \) & \(1000\)\\
	\hline Initial Function Width & \( \sigma_o \) & Neuron Grid Radius\\
	\hline
	\end{tabular}
	\end{center}
	\end{table}

	One of the parameters of the Self-Organizing Map is the number of iterations needed for convergence and attainment of the statistical characteristics of the input data. As described in \cite{haykin1999neural}, the number of iterations \(I\) is given by a simple formula based on the number of neurons in the output grid, described by \autoref{nint}, where \(n\) is the number of neurons of the output grid.
	
	\begin{equation}
		\label{nint}
		I = 1000+500 \cdot n
	\end{equation}
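	For the grid of \(100\) neurons used in this work, \autoref{nint} gives \(51000\) iterations; a trivial sketch (function name ours):

```python
def som_iterations(n_neurons: int) -> int:
    """Iteration count I = 1000 + 500 * n for a grid with n neurons."""
    return 1000 + 500 * n_neurons

print(som_iterations(100))  # 10 x 10 grid -> 51000 iterations
```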
	
	The output of the SOM was a two-dimensional grid of 100 neurons, disposed in \(10\) rows and \(10\) columns. The simulation results for each time step were input to the Kohonen Network, analyzing 4 characteristics: \(x\)-axis position, \(y\)-axis position, apparent resistivity and phase. After this, the Kohonen Network results were compared with the actual simulation results, to verify whether the network fits the data correctly.
	
	With this methodology, a dimensionality reduction of the input data is expected, optimizing decision-making algorithms, which will no longer need to process a huge amount of data, using instead the Kohonen Network output neurons, which represent the input with fewer data points.	
	
\newsec{Results and Discussions}
	
	The results from the simulation and from the Kohonen Networks are presented in \autoref{SOMt1} to \autoref{SOMt9}, showing the phase as a function of depth and of position along the sea line, and a map of apparent resistivity and phase for each time step. In these graphs, the grey circles are the original measurements generated by simulation, and the black triangles are the SOM outputs, a dimensionality reduction of the input data. For a homogeneous primary medium, the phase angle is \(\phi = 45^\circ\); therefore, the reference for the phase map is \(45^\circ\). 
	
	In \autoref{PRHT1}, a tendency toward a greater phase angle can be observed, with a smaller apparent resistivity. Apparent resistivity differs from real resistivity, only representing a comparison between the object resistivity and that of the primary medium. Therefore, an object with resistivity smaller than that of sea water (\(\rho_0\)) is observed. However, it must be noted that the resistive object is still not visible to the set of sensors, due to the shadowing caused by the conductive object and its distance from the receptors.

	
\begin{figure}[!htbp]
\centering %
\subfigure[Phase \(\phi\) \emph{versus} Apparent Resistivity \(\rho\) \label{PRHT1}]{\scalebox{1.2}{\includegraphics{Kohonen001PRHT.eps}}  } %
\\%\vspace{.3cm}  %
\subfigure[Object Position Along the Sea Line \emph{versus} Phase \(\phi\) \label{XPT1}]{\scalebox{1.2}{\includegraphics{Kohonen001XPT.eps}} } %
\\%\vspace{.3cm}  %
\subfigure[Depth \emph{versus} Phase \(\phi\) \label{YPT1}]{\scalebox{1.2}{\includegraphics{Kohonen001YPT.eps}} } %
\caption{Simulated electromagnetic data (circle) and SOM dimensionality reduction output (triangle) at \(t=0\  s\).} %
\label{SOMt1}
\end{figure}


	Observing the SOM results in \autoref{PRHT1} compared with the original measurements, some neurons were fitted off the original data. This may indicate either an algorithmic failure or an actual probability distribution, but it can only be determined with a higher-resolution map, with more receptors. However, receptors for this frequency range are physically large, which means that only a small number of EM receptors can be placed in the map. 
	
	At the same time, in \autoref{XPT1}, the horizontal position of the object can be determined, together with its width. From the graph, the barycentre can be seen at the origin, with an approximate width of \(20\ m\). The object seems bigger than it really is due to the resolution of the measurement; with more sensors, a more accurate measurement could be made. 
	
	The SOM algorithm results observed in \autoref{XPT1} fitted this dimension with great accuracy, with little or no loss of information. Some neurons are even located at \(-5\  m\) and \(5\  m\), predicting a high-phase region in this area. Thus, the SOM maps could, in this case, compensate for a resolution loss, placing neurons in regions lacking receptors.
	
	\autoref{YPT1} shows the relationship between phase and the object depth in the water. The position where the phase changes usually represents a change of medium for the electromagnetic wave. In this figure, the conductive object is mapped between \(10\  m\) and \(25\  m\), within the range of the conductive structure. 
	
	Again, the SOM algorithm was able to fit the data in \autoref{YPT1}, even using the probability density estimation inherent to this kind of algorithm to place a ``virtual sensor'', as shown in \autoref{XPT1}. The results are consistent with the phase shift of the ``virtual sensor'' in \autoref{XPT1}, which means that the algorithm is not creating spurious data.
	
\begin{figure}[!htbp]
\centering %
\subfigure[Phase \(\phi\) \emph{versus} Apparent Resistivity \(\rho\) \label{PRHT3}]{\scalebox{1.2}{\includegraphics{Kohonen003PRHT.eps}}  } %
\\%\hspace{.3cm}  %
\subfigure[Object Position Along the Sea Line \emph{versus} Phase \(\phi\)\label{XPT3}]{\scalebox{1.2}{\includegraphics{Kohonen003XPT.eps}} } %
\\%\hspace{.3cm}  %
\subfigure[Depth \emph{versus} Phase \(\phi\) \label{YPT3}]{\scalebox{1.2}{\includegraphics{Kohonen003YPT.eps}} } %
\caption{Simulated electromagnetic data (circle) and SOM dimensionality reduction output (triangle) at \(t=60\  s\).} %
\label{SOMt3}
\end{figure}

	\autoref{SOMt3} presents the next analyzed time step. At \(60\  s\), both objects are observed, as demonstrated in \autoref{PRHT3}. This is confirmed in the next graphs, where the conductive object appears farther than in the last observation (\(y = 60\  m\)) while the resistive object approaches (\(y=140\  m\)), making the width prediction inaccurate, as shown in \autoref{XPT3} and \autoref{YPT3}.

	The Kohonen Network fitted the data with great accuracy, again placing ``virtual sensors'' between the actual ones. These ``virtual sensors'' did not create spurious data, with predictions consistent with the measured data.

\begin{figure}[!htbp]
\centering %
\subfigure[Phase \(\phi\) \emph{versus} Apparent Resistivity \(\rho\) \label{PRHT5}]{\scalebox{1.2}{\includegraphics{Kohonen005PRHT.eps}}  } %
\\%\hspace{.3cm}  %
\subfigure[Object Position Along the Sea Line \emph{versus} Phase \(\phi\) \label{XPT5}]{\scalebox{1.2}{\includegraphics{Kohonen005XPT.eps}} } %
\\%\hspace{.3cm}  %
\subfigure[Depth \emph{versus} Phase \(\phi\)\label{YPT5}]{\scalebox{1.2}{\includegraphics{Kohonen005YPT.eps}} } %
\caption{Simulated electromagnetic data (circle) and SOM dimensionality reduction output (triangle) at \(t=100\  s\).} %
\label{SOMt5}
\end{figure}

	At the time step shown in \autoref{SOMt5}, the resistive object starts to shadow the conductive one, as shown in \autoref{PRHT5}. Its horizontal position can now be predicted between \(-50\  m\) and \(-10\  m\), with side lobes present from \(0\  m\) to \(30\  m\). The depth of the object is predicted between \(80\  m\) and \(100\  m\), a range that covers the object. The SOM algorithm fitted the parameters well, again presenting accurate virtual sensors.

\begin{figure}[!htbp]
\centering %
\subfigure[Phase \(\phi\) \emph{versus} Apparent Resistivity \(\rho\) \label{PRHT7}]{\scalebox{1.2}{\includegraphics{Kohonen007PRHT.eps}}  } %
\\%\hspace{.1cm}  %
\subfigure[Object Position Along the Sea Line \emph{versus} Phase \(\phi\) \label{XPT7}]{\scalebox{1.2}{\includegraphics{Kohonen007XPT.eps}} } %
\\%\hspace{.1cm}  %
\subfigure[Depth \emph{versus} Phase \(\phi\) \label{YPT7}]{\scalebox{1.2}{\includegraphics{Kohonen007YPT.eps}} } %
\caption{Simulated electromagnetic data (circle) and SOM dimensionality reduction output (triangle) at \(t=140\  s\).} %
\label{SOMt7}
\end{figure}

	In \autoref{SOMt7}, the same behaviour is present, both for the measurement data and the SOM algorithm, with accurate virtual sensors. The range of the resistive object can be predicted, with the barycentre predicted by a virtual sensor at \(-25\  m\), precisely its position. The object width is confirmed, between \(-40\  m\) and \( -10\  m\); if the SOM virtual sensors are taken into account, the range narrows to \(-35\  m\) to \(-15\  m\), precisely the object range. The depth of the object can be predicted between \(50\  m\) and \(80\  m\), but the SOM algorithm places the object at \(65\  m\), a \(5\  m\) error.

\begin{figure}[!htbp]
\centering %
\subfigure[Phase \(\phi\) \emph{versus} Apparent Resistivity \(\rho\) \label{PRHT9}]{\scalebox{1.2}{\includegraphics{Kohonen009PRHT.eps}}  } %
\\%\hspace{.1cm}  %
\subfigure[Object Position Along the Sea Line \emph{versus} Phase \(\phi\) \label{XPT9}]{\scalebox{1.2}{\includegraphics{Kohonen009XPT.eps}} } %
\\%\hspace{.1cm}  %
\subfigure[Depth \emph{versus} Phase \(\phi\) \label{YPT9}]{\scalebox{1.2}{\includegraphics{Kohonen009YPT.eps}} } %
\caption{Simulated electromagnetic data (circle) and SOM dimensionality reduction output (triangle) at \(t=180\  s\).} %
\label{SOMt9}
\end{figure}

In \autoref{SOMt9}, however, the virtual sensors placed by the SOM algorithm are present but not dominant within the object range, increasing again the possible width of the object. Still, the virtual sensors generated consistent results, within the measurement range. On the other hand, the object was precisely located \(20\  m\) deep in the water.


\newsec{Conclusion}

	A dimensionality reduction and feature extraction algorithm for object detection in conductive media using Self-Organizing Maps (SOM) was presented. Using simulated data as input to the SOM algorithm, a set of representative units was fitted to the input data.
	
	The SOM outputs were able to fit the input data and to attain its probability distribution, generating virtual sensors (neurons fitted in high-probability regions) with accurate data, even in a medium as difficult for electromagnetic propagation as sea water. 
	
	In some cases, the virtual sensors were able to identify the moving objects, helping to differentiate them and sometimes precisely predicting their boundaries. The electromagnetic signal alone was not able to produce such precise results. The best results were achieved in less extreme situations, in which the objects are farther from the sensors and neither of them strongly predominates in the detection. 
	
	The benefit of using the SOM outputs instead of the measured data is the dimensionality reduction, from \(9999\) points to \(100\) representative data points, which speeds up subsequent algorithms, e.g. decision support algorithms in naval systems.
		
	Further work can expand the EM analysis to a three-dimensional input and develop a decision-making algorithm using artificial intelligence. Combined, these algorithms can form decision support software for naval systems. The same feature extraction algorithm can be used in other areas, such as bioelectromagnetics, to detect anomalies in tissues from EM imaging.
	

\begin{abstract}
{\bf Resumo}. O uso do algoritmo de Mapas Auto-Organizáveis (SOM) para extração de características e redução de dimensionalidade aplicado à detecção de objetos subaquáticos em regiões marinhas foi apresentado. Simulações em Elementos Finitos foram utilizadas para a geração de um modelo direto da região de estudo e, a partir destes dados, um Mapa Auto-Organizável foi utilizado para ajustar os dados e retornar um modelo similar, com dimensionalidade menor e mesmas características. Sensores virtuais foram criados automaticamente pelo algoritmo SOM com resultados consistentes, completando os vazios de resolução dos dados simulados. Estes resultados são úteis para acelerar algoritmos de auxílio à tomada de decisão, reduzindo o número de entradas destes algoritmos, e em outros assuntos, como bioeletromagnética.

\end{abstract}

\bibliographystyle{abbrv}
\bibliography{artigoTEMA}
\end{document}