The new Deep Web

I know, you landed on this site by chance and you don't even know what the Deep Web is? Let's get a quick overview of the subject and its future in this chapter; then you will know more about what it is, how to use the network, and what is going wrong.

The Deep Web, or The Deep Internet

The depths of the ocean are as unexplored as the earth's crust, and the same can be said of the Deep Web: an ocean of content, films, programs, and even entire websites that are not available on the surface layer.

To understand a little more about the subject, we recommend searching for the following terms: P2P, Tor.

Some sites say that we see only 15% of the entire internet; others venture that what we have available is only 10%. The reality is that it is impossible to measure something that is no longer under our control.

Even the physical ocean has not been fully explored. The P2P networks that carry this traffic between their own nodes can be anywhere: on any machine, anywhere on the planet.

Currently, there are file sharing and hidden service applications even for Android phones, and with the internet's continuing expansion, controlling every device is an impossible task.

The concept of the Deep Web covers internet content that is not hosted on centralized servers in professional data centers, but spread across a worldwide network of devices, which can be anything from a computer to a router or even a cell phone.

When downloading files through a BitTorrent client, you are in a way pulling content from the Deep Web, and also contributing to its growth, since you start serving part of the data you downloaded to other clients.

Probably the site where you found the reference is available online, but the downloaded file is not: it is spread across several computers around the world, and it will exist for as long as it is popular.
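To make this more concrete: the metadata that describes a torrent (the .torrent file) is written in a simple encoding called bencoding, and it is the hash of part of this metadata that identifies the content on the network, not any central server. A minimal decoder sketch in Python; the sample torrent data here is made up for illustration:

```python
def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at offset i.

    Returns (value, next_offset). Handles the four bencode types:
    integers (i42e), lists (l...e), dicts (d...e), and
    length-prefixed strings (8:file.iso).
    """
    c = data[i:i + 1]
    if c == b"i":                        # integer: i42e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                        # list: l...e
        i += 1
        items = []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":                        # dict: d...e (keys are strings)
        i += 1
        d = {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            val, i = bdecode(data, i)
            d[key] = val
        return d, i + 1
    colon = data.index(b":", i)          # string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

# Hypothetical minimal torrent metadata
raw = b"d8:announce18:http://tracker/ann4:infod6:lengthi123e4:name8:file.isoee"
meta, _ = bdecode(raw)
print(meta[b"info"][b"name"])            # b'file.iso'
```

The "info" dictionary in real torrents also lists the piece hashes that let each downloader verify every chunk it receives from strangers.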

That is why calculating the size of the Deep Web is impossible: it is totally volatile. Files and content are not there forever; they exist only as long as they remain popular.

The Deep Web has several layers; many people only reach the first layer and are unaware of the others. We will look at some of the existing layers, without being limited to these, as there may be others.

The first level of the Deep Web: File sharing networks.

BitTorrent, let's call it the first layer of the Deep Web, is an area that is still detectable, but taking a file off the air is already an impossible task.

A torrent file is found on several machines around the world; it is users' computers that download the content while simultaneously sending it to one another.

Files are located through trackers, servers that store information about file pieces and their current locations. When a client announces a torrent hash to a tracker, the tracker shares the information it holds about that torrent, including with other tracker servers, and BitTorrent clients receive blocks of IP addresses to open connections and fetch the content.
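At its core, this tracker exchange is just an HTTP request: the client sends its info-hash and connection details, and the tracker answers with a bencoded list of peers. A sketch of how such an announce request is built; the tracker URL, hash, and peer id below are made up for illustration:

```python
import hashlib
from urllib.parse import urlencode

# Hypothetical 20-byte info-hash (in a real client, the SHA-1 of the
# bencoded "info" dictionary of the .torrent file) and peer id.
info_hash = hashlib.sha1(b"example info dictionary").digest()
peer_id = b"-XX0001-123456789012"        # 20 bytes, chosen by the client

params = urlencode({
    "info_hash": info_hash,              # raw bytes, percent-encoded
    "peer_id": peer_id,
    "port": 6881,                        # port we accept connections on
    "uploaded": 0,
    "downloaded": 0,
    "left": 0,                           # bytes still missing
    "event": "started",
})
announce_url = "http://tracker.example.org/announce?" + params
print(announce_url)
```

The tracker's response is itself bencoded and carries a "peers" key with compact IP:port pairs, which is exactly the block of addresses the client then dials.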

If someone publishes a secret file, at this level it is still possible to determine who did it, as there is no way to share files at this layer without being identified.

In addition to BitTorrent, there are other file sharing networks, such as Gnutella, eMule, and others. Each of these networks usually has one or more client applications. They are currently not as common and not as effective as torrents, since content ends up scattered and lost among many sources.

There are file sharing networks with features to keep their users anonymous, but they do not belong at this level of the Deep Web.

The second level of the Deep Web: Overlay networks.

This level is also where the anonymous part of the internet begins: the services here cannot be located or tracked, at least not easily.

That is because the networks within the Deep Web are overlay networks, or, in friendlier terms, a "network on top of a network".

One of the best known is the TOR network, followed by I2P, Freenet, and other less influential ones.

Both TOR and I2P also give the connected user protection to browse the internet with an obfuscated IP address; that is, you appear with a different IP address on each connection.

Within the network, your location becomes a secret alias: each relay on the route knows only the previous and the next hop, so the final node never learns where the traffic originated, unlike the traditional internet, where the path of packets can be traced with tools like tracert.
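The "layered" routing that makes this possible can be illustrated with a toy sketch: the client wraps its message in one layer per relay, and each relay strips exactly one layer, so no single relay sees both the origin and the payload. The XOR keystream below is a deliberately simplistic stand-in for real cryptography, purely to show the layering idea; the relay keys are invented:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream made by chained hashing -- NOT real cryptography.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

relay_keys = [b"guard", b"middle", b"exit"]   # one shared key per hop
message = b"GET / HTTP/1.1"

# The client wraps one layer per relay, innermost layer for the exit.
cell = message
for key in reversed(relay_keys):
    cell = xor(cell, keystream(key, len(cell)))

# Each relay peels exactly one layer as the cell travels the circuit;
# only after the last (exit) layer is removed is the payload visible.
for key in relay_keys:
    cell = xor(cell, keystream(key, len(cell)))

print(cell == message)                        # True
```

In the real protocol each layer is proper authenticated encryption negotiated per circuit, but the structural point is the same: the guard sees who you are but not what you ask, and the exit sees what you ask but not who you are.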

With the Marco Civil da Internet (Brazil's civil-rights framework for the internet), access providers (Telefônica, NET, Intelig) can no longer store the URLs that users access (though they must store the network access log), while content providers (Facebook, Google) can and must store all user access records.

If you access content with an IP address different from your real one, you can no longer be located, and this in some way guarantees your privacy on the network.

Although these networks are partly financed by government institutions, there are also government agencies around the world, such as the NSA and the FBI, constantly working to break them and to track down where a hidden service is hosted or who is responsible for published content.

Within the Tor network we constantly see hidden service servers have their content taken down by governments that managed to locate the dedicated servers and disable their services.

People can be identified on these networks, but only through various attacks. It is recommended to always keep the software updated, downloaded from the official websites, so that you are not tracked.

Some tips to avoid being tracked on these networks:
- Disable JavaScript: there are websites created by attacker teams that use JavaScript to discover your machine's real IP address. In Firefox: about:config > javascript.enabled > false.
- Disable the HTTP Referer header: when accessing a service, the browser always reports where a given access came from, so your origin ends up being revealed. In Firefox: about:config > network.http.sendRefererHeader > 0.
- Use DNS only within the network: sometimes the browser bypasses the network proxy's DNS and uses your internet connection's resolver instead. In Firefox, force proxy DNS at about:config > network.proxy.socks_remote_dns > true.
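The same about:config preferences can be set persistently in a `user.js` file inside your Firefox profile folder, so they survive browser restarts. A sketch mirroring the three tips above (adjust to your own threat model; disabling JavaScript breaks many sites):

```javascript
// user.js -- place in your Firefox profile directory.
// Block scripts that could leak your real IP address.
user_pref("javascript.enabled", false);
// Never send the Referer header.
user_pref("network.http.sendRefererHeader", 0);
// Resolve DNS through the SOCKS proxy, not your ISP's resolver.
user_pref("network.proxy.socks_remote_dns", true);
```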

General tips:
- Never use real data like name, email, etc. to access hidden services.

The third level of the Deep Web: Secret and undetectable networks.

Freenet is a data transfer network concept, based on sharing files anonymously and in a restricted way among all the computers running the application.

There are two modes of operation: restricted to your friends' computers, or the less secure mode, connected over the internet to anyone else using that mode.

Unlike TOR and I2P, this network does not let you access the regular internet through it.

When you access the network, a local address is provided on a specific port (by default 127.0.0.1 on port 8888), and on that home page you have everything you need to access the Freenet network.

When you start using the network it is initially very slow, because it needs to load about 10 GB to 20 GB of data onto your computer; this usually happens in the first 24 hours of use.

As this network has no servers the way the TOR network does, there is no dynamic content. All content on this network's websites is a handful of files that run in the client's browser, just like any other file downloaded from or uploaded to the network.

Through the home page, you have a list of starter sites and global indexes to begin your searches.

When you start using the network, you are asked how much disk space you want to make available to Freenet, between 1 and 10 GB; your choice also determines how much content the network needs to load onto your computer for it to be functional.

Freenet in its public mode relies on some centralized servers, but its most secure mode is private use, among trusted computers, neighbors, and so on.

The future of Deep Web

Several services that were previously anonymous are now under constant attack. Government departments also create hidden services of their own in order to track the users of these networks using the most diverse techniques.

The future of the network is still uncertain: the government itself invests in the creation and development of some of these networks, while at the same time trying to identify users who commit crimes on the internet.

But one thing is certain: every day new methods appear to keep the Deep Web growing, and as long as there are devices connected to the internet, the Deep Web will exist in one way or another.

There are new features in TOR itself to raise users' security levels, now with full data relaying: TOR will carry TCP packets in full, not just proxy requests from browsers, opening the possibility of creating fully hidden nodes or hidden gateways.

The new Deep Web will become a network interconnecting several networks; to access it, it will not be enough to join one network, you will have to join them all, multiplying the connection tunnels and raising your level of online privacy (or do you like being tracked and chased?).
