Honeypots and Honeynets

What is a honeypot?

A honeypot is a system used to gather information about the activity of attackers or intruders on a network. It acts like a trap: it records how an intruder approaches and infiltrates a system, how they behave once inside, and stores this data in its database (here 'database' means a storage area, not a collection of data records).

[Figure: A honeypot placed within the DMZ]

What makes a honeypot?
Building a honeypot requires a PC, preferably running a UNIX-based operating system, and a sniffer tool. (A sniffer tool provides the ability to see the traffic passing between the firewall and the honeypot.)
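
As a rough illustration of the sniffer part, the following Python sketch uses the Scapy library to watch traffic involving the honeypot. It assumes Scapy is installed and the script is run with packet-capture privileges; the honeypot address 10.0.0.5 is a made-up placeholder.

```python
# Minimal sniffer sketch (assumes Scapy is installed and the script runs with
# root/administrator privileges). HONEYPOT_IP is a hypothetical placeholder.
from scapy.all import sniff, IP, TCP

HONEYPOT_IP = "10.0.0.5"  # assumed address of the honeypot host

def log_packet(pkt):
    """Print a one-line summary of each packet to or from the honeypot."""
    if IP in pkt:
        proto = "TCP" if TCP in pkt else str(pkt[IP].proto)
        print(f"{pkt[IP].src} -> {pkt[IP].dst}  proto={proto}  len={len(pkt)} bytes")

# The BPF filter limits capture to traffic involving the honeypot, i.e. what
# passes between the firewall and the honeypot on this network segment.
sniff(filter=f"host {HONEYPOT_IP}", prn=log_packet, store=False)
```

In practice a dedicated tool such as tcpdump or Wireshark serves the same purpose; the sketch only shows the idea of observing the honeypot's traffic.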

Where can honeypots be placed?

Honeypots can be placed almost anywhere in a network: outside the DMZ, inside the DMZ or even on the internal network.

Honeypots are an additional security mechanism. They differ from firewalls in that honeypots do not filter the traffic that passes through them, and they differ from intrusion detection systems (IDSs) in that honeypots do not raise alarms for predetermined threats.

Honeypots are used only to collect the abnormal behavior of an individual and to record that behavior when it matches the general behavior of a predetermined attacker profile.

The actual placement of a honeypot may vary depending on the requirements or the service expected from it. For a number of reasons, many companies deploy several honeypots (within the internal network, outside the DMZ or within the DMZ).

The placement of a honeypot, as in the case of firewalls and IDSs, is important. For example, a honeypot that is established within a DMZ would not record the abnormal behavior of an individual (or of traffic directed towards the network) before that traffic reaches the DMZ. Therefore, the malicious behavior of an attacker before they reach the DMZ cannot be recorded.

On the other hand, if honeypots are also placed outside the DMZ (both in front of the DMZ and within the internal network), the behavior of an attacker before the DMZ and within the internal network can be recorded.

What are the goals behind setting up a honeypot?

Any honeypot (regardless of its position within a network) is expected to serve two main goals:

1. To record and learn how an intruder/attacker may penetrate a system.

2. To gather forensic information for the prosecution of intruders (see the sketch below).
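
As a minimal sketch of both goals, the fragment below implements a very low-interaction honeypot: it listens on an otherwise unused port, accepts whatever an intruder sends first, and appends a timestamped record to a log file that could later serve as forensic evidence. The port number and log file name are arbitrary assumptions, not part of any standard tool.

```python
# Minimal low-interaction honeypot sketch: listen on an unused port acting as
# bait, and record every connection attempt with a UTC timestamp so the log
# can later be used as forensic evidence. Port and file name are assumptions.
import socket
from datetime import datetime, timezone

LISTEN_PORT = 2222          # hypothetical unused port chosen as bait
LOG_FILE = "honeypot.log"   # assumed location of the forensic log

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen()
    while True:
        conn, (src_ip, src_port) = srv.accept()
        with conn:
            conn.settimeout(5)                  # don't let a client hold the socket open
            try:
                first_bytes = conn.recv(1024)   # whatever the intruder sends first
            except socket.timeout:
                first_bytes = b""
        record = (f"{datetime.now(timezone.utc).isoformat()} "
                  f"{src_ip}:{src_port} sent {first_bytes!r}\n")
        with open(LOG_FILE, "a") as log:
            log.write(record)                   # append-only record for later analysis
```

The recorded source addresses and payloads show how the intruder approached the system (goal 1), and the timestamped, append-only log preserves that evidence (goal 2).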


What is a honeynet?

Two or more honeypots make up a honeynet. A honeynet is used for monitoring a large network where a single honeypot may not be able to meet the goals expected of it.

To efficiently centralize a honeynet and the analysis tools, a honeyfarm is used.
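
One hedged way to picture this centralization: each honeypot in the honeynet forwards its log records to a single collector process running on the honeyfarm host, which keeps one consolidated log for analysis. The hostname, port and file name below are made-up placeholders.

```python
# Sketch of centralised log collection for a honeyfarm. Each honeypot forwards
# its records over UDP to one collector host; address, port and file name are
# assumptions chosen for illustration only.
import socket

COLLECTOR_ADDR = ("honeyfarm.example.internal", 5140)  # hypothetical collector

def forward_record(record: str) -> None:
    """Called on each honeypot: send one log record to the central collector."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(record.encode(), COLLECTOR_ADDR)

def run_collector() -> None:
    """Run on the honeyfarm host: receive records from all honeypots and store them."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", COLLECTOR_ADDR[1]))
        with open("honeyfarm.log", "a") as log:
            while True:
                data, (src_ip, _) = sock.recvfrom(4096)
                log.write(f"{src_ip} {data.decode(errors='replace')}\n")
```

Real deployments would more likely use an existing mechanism such as syslog forwarding or a log-shipping agent; the point is simply that all honeypots report to one place for analysis.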
