chore(glossary): remove duplication of "Denial of Service" (mdn#37215)
* chore(glossary): remove duplication of 'Denial of Service'

* improve document titles
leon-win authored Dec 16, 2024
1 parent 8d5d188 commit c13b7a0
Showing 7 changed files with 31 additions and 55 deletions.
1 change: 1 addition & 0 deletions files/en-us/_redirects.txt
@@ -3560,6 +3560,7 @@
/en-US/docs/Glossary/Client_hints /en-US/docs/Web/HTTP/Client_hints
/en-US/docs/Glossary/Condition /en-US/docs/Glossary/Conditional
/en-US/docs/Glossary/Content_type /en-US/docs/Glossary/MIME_type
/en-US/docs/Glossary/DOS_attack /en-US/docs/Glossary/Denial_of_Service
/en-US/docs/Glossary/DTD /en-US/docs/Glossary/Doctype
/en-US/docs/Glossary/Descriptor_(CSS) /en-US/docs/Glossary/CSS_Descriptor
/en-US/docs/Glossary/Distributed_DenialofService /en-US/docs/Glossary/Distributed_Denial_of_Service
12 changes: 0 additions & 12 deletions files/en-us/_wikihistory.json
@@ -1897,18 +1897,6 @@
"ajinkya_p"
]
},
"Glossary/DOS_attack": {
"modified": "2019-03-23T23:08:00.112Z",
"contributors": [
"SebastienParis",
"Sodan",
"klez",
"Aleksej",
"Andrew_Pfeiffer",
"pbmj5233",
"RufusCSharma"
]
},
"Glossary/DTLS": {
"modified": "2019-12-09T06:56:39.078Z",
"contributors": [
25 changes: 22 additions & 3 deletions files/en-us/glossary/denial_of_service/index.md
@@ -1,11 +1,30 @@
---
title: Denial of Service
title: Denial of Service (DoS)
slug: Glossary/Denial_of_Service
page-type: glossary-definition
---

{{GlossarySidebar}}

**DoS** (Denial of Service) is a category of network attack that consumes available server resources, typically by flooding the server with requests. The server is then sluggish or unavailable for legitimate users.
**Denial of Service** (DoS) is a category of network attack that consumes available {{Glossary("server")}} resources, typically by flooding the server with requests. The server is then sluggish or unavailable for legitimate users.

See {{glossary("DOS attack")}} for more information.
Computers have limited resources, for example computation power or memory. When these are exhausted, the program can freeze or crash, making it unavailable. A DoS attack consists of various techniques to exhaust these resources and make a server or a network unavailable to legitimate users, or at least make the server perform sluggishly.

There are also {{Glossary("Distributed Denial of Service", "Distributed Denial of Service (DDoS)")}} attacks in which a multitude of servers are used to exhaust the computing capacity of an attacked computer.

### Types of DoS attack

DoS attacks are more of a category than a particular kind of attack. Here is a non-exhaustive list of DoS attack types:

- bandwidth attack
- service request flood
- SYN flooding attack
- ICMP flood attack
- peer-to-peer attack
- permanent DoS attack
- application level flood attack

## See also

- [Denial-of-service attack](https://en.wikipedia.org/wiki/Denial-of-service_attack) on Wikipedia
- [Denial of Service](https://owasp.org/www-community/attacks/Denial_of_Service) on OWASP
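The updated glossary entry above describes DoS as resource exhaustion but, being a definition, stops short of showing a defense. As a rough illustration (not part of this commit), here is a minimal Node.js/TypeScript sketch that caps concurrent connections per client IP so a single flooding client cannot monopolize the server; the limit of 20 and the 503 response are illustrative assumptions, not MDN guidance.

```ts
// Minimal sketch: cap concurrent connections per client IP so one flooding
// client cannot exhaust the server's sockets. The threshold is an assumption.
import http from "node:http";

const MAX_CONCURRENT_PER_IP = 20; // illustrative limit, tune for your workload
const active = new Map<string, number>();

const server = http.createServer((req, res) => {
  const ip = req.socket.remoteAddress ?? "unknown";
  const current = active.get(ip) ?? 0;

  if (current >= MAX_CONCURRENT_PER_IP) {
    // Reject excess connections instead of letting them consume memory/CPU.
    res.writeHead(503, { "Retry-After": "1" });
    res.end("Too many concurrent connections");
    return;
  }

  active.set(ip, current + 1);
  res.on("close", () => {
    const remaining = (active.get(ip) ?? 1) - 1;
    if (remaining <= 0) {
      active.delete(ip);
    } else {
      active.set(ip, remaining);
    }
  });

  res.end("OK");
});

server.listen(8080);
```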
8 changes: 4 additions & 4 deletions files/en-us/glossary/distributed_denial_of_service/index.md
@@ -1,14 +1,14 @@
---
title: Distributed Denial of Service
title: Distributed Denial of Service (DDoS)
slug: Glossary/Distributed_Denial_of_Service
page-type: glossary-definition
---

{{GlossarySidebar}}

A **Distributed Denial-of-Service** (DDoS) is an attack in which many compromised systems are made to attack a single target, in order to swamp server resources and block legitimate users.
**Distributed Denial-of-Service** (DDoS) is a type of {{Glossary("Denial of Service", "DoS")}} attack in which many compromised systems are made to attack a single target, in order to swamp server resources and block legitimate users.

Normally many persons, using many bots, attack high-profile Web {{glossary("server","servers")}} like banks or credit-card payment gateways. DDoS concerns computer networks and CPU resource management.
Normally many persons, using many bots, attack high-profile Web {{Glossary("server", "servers")}} like banks or credit-card payment gateways. DDoS concerns computer networks and CPU resource management.

In a typical DDoS attack, the assailant begins by exploiting a vulnerability in one computer system and making it the DDoS master. The attack master, also known as the botmaster, identifies and infects other vulnerable systems with malware. Eventually, the assailant instructs the controlled machines to launch an attack against a specified target.

@@ -27,4 +27,4 @@ The United States Computer Emergency Readiness Team (US-CERT) defines symptoms o

## See also

- [Denial-of-service attack](https://en.wikipedia.org/wiki/Denial-of-service_attack) on Wikipedia
- [Distributed DoS attack](https://en.wikipedia.org/wiki/Denial-of-service_attack#Distributed_DoS) on Wikipedia
32 changes: 0 additions & 32 deletions files/en-us/glossary/dos_attack/index.md

This file was deleted.

2 changes: 1 addition & 1 deletion files/en-us/glossary/rate_limit/index.md
@@ -6,7 +6,7 @@ page-type: glossary-definition

{{GlossarySidebar}}

In computing, especially in networking, **rate limiting** means controlling how many operations can be performed in a given amount of time, usually to avoid overloading the system and causing performance degradation. For example, a server might limit the number of requests it will accept from a single client in a given time period, which not only optimizes the server's overall performance but also mitigates attacks like {{glossary("DoS attack")}}.
In computing, especially in networking, **rate limiting** means controlling how many operations can be performed in a given amount of time, usually to avoid overloading the system and causing performance degradation. For example, a server might limit the number of requests it will accept from a single client in a given time period, which not only optimizes the server's overall performance but also mitigates attacks like {{Glossary("Denial of Service", "DoS attack")}}.

Rate limiting is typically synonymous with {{glossary("throttle", "throttling")}}, although {{glossary("debounce", "debouncing")}} is another viable strategy which provides better semantics and user experience in certain cases.
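To make the rate-limiting definition above concrete, here is a minimal token-bucket sketch (one common way such a limit is implemented; it is not part of this commit, and the capacity of 10 and refill rate of 5 tokens per second are illustrative assumptions).

```ts
// Token bucket: each client may perform `capacity` operations in a burst,
// refilled at `refillPerSecond`; excess requests are rejected.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private capacity: number,
    private refillPerSecond: number,
  ) {
    this.tokens = capacity;
  }

  tryRemoveToken(): boolean {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillPerSecond,
    );
    this.lastRefill = now;

    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request allowed
    }
    return false; // rate limit exceeded
  }
}

// Usage: allow bursts of 10 requests, refilling 5 tokens per second.
const limiter = new TokenBucket(10, 5);
if (!limiter.tryRemoveToken()) {
  console.log("429 Too Many Requests"); // reject or delay the operation
}
```

A fixed-window counter is simpler, but it permits bursts at window boundaries, which is one reason token buckets are a frequent choice for this kind of limit.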

@@ -40,7 +40,7 @@ Short-lived connections have two major hitches: the time taken to establish a ne

A persistent connection is one which remains open for a period of time, and can be reused for several requests, saving the need for a new TCP handshake, and utilizing TCP's performance enhancing capabilities. This connection will not stay open forever: idle connections are closed after some time (a server may use the {{HTTPHeader("Keep-Alive")}} header to specify a minimum time the connection should be kept open).

Persistent connections also have drawbacks; even when idling they consume server resources, and under heavy load, {{glossary("DoS attack", "DoS attacks")}} can be conducted. In such cases, using non-persistent connections, which are closed as soon as they are idle, can provide better performance.
Persistent connections also have drawbacks; even when idling they consume server resources, and under heavy load, {{Glossary("Denial of Service", "DoS attacks")}} can be conducted. In such cases, using non-persistent connections, which are closed as soon as they are idle, can provide better performance.

HTTP/1.0 connections are not persistent by default. Setting {{HTTPHeader("Connection")}} to anything other than `close`, usually `keep-alive`, will make them persistent.
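As a side illustration of the connection reuse described above (not part of this commit), the following Node.js/TypeScript sketch enables a keep-alive agent so successive requests to one host can share a TCP connection; `example.com` and the socket limit of 6 are placeholders.

```ts
// Reuse one TCP connection for several requests via a keep-alive agent,
// avoiding a new TCP handshake per request.
import http from "node:http";

const agent = new http.Agent({
  keepAlive: true, // keep idle sockets open for reuse
  maxSockets: 6, // mirrors the common per-host browser connection limit
});

function get(path: string): void {
  http.get({ host: "example.com", path, agent }, (res) => {
    res.resume(); // drain the body so the socket can be reused
    res.on("end", () => console.log(`${path}: ${res.statusCode}`));
  });
}

// Both requests can travel over the same persistent connection.
get("/");
get("/about");
```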

@@ -61,7 +61,7 @@ By default, [HTTP](/en-US/docs/Web/HTTP) requests are issued sequentially. The n

Pipelining is the process to send successive requests, over the same persistent connection, without waiting for the answer. This avoids latency of the connection. Theoretically, performance could also be improved if two HTTP requests were to be packed into the same TCP message. The typical [MSS](https://en.wikipedia.org/wiki/Maximum_segment_size) (Maximum Segment Size), is big enough to contain several simple requests, although the demand in size of HTTP requests continues to grow.

Not all types of HTTP requests can be pipelined: only {{glossary("idempotent")}} methods, that is {{HTTPMethod("GET")}}, {{HTTPMethod("HEAD")}}, {{HTTPMethod("PUT")}} and {{HTTPMethod("DELETE")}}, can be replayed safely. Should a failure happen, the pipeline content can be repeated.
Not all types of HTTP requests can be pipelined: only {{Glossary("idempotent")}} methods, that is {{HTTPMethod("GET")}}, {{HTTPMethod("HEAD")}}, {{HTTPMethod("PUT")}} and {{HTTPMethod("DELETE")}}, can be replayed safely. Should a failure happen, the pipeline content can be repeated.

Today, every HTTP/1.1-compliant proxy and server should support pipelining, though many have limitations in practice: a significant reason no modern browser activates this feature by default.
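Since no modern browser enables pipelining by default, the following raw-socket sketch is purely illustrative (and not part of this commit): it writes two idempotent GET requests back-to-back on one connection before any response arrives, with `example.com` as a placeholder host.

```ts
// Pipelining sketch: send two GET requests on one connection without
// waiting for the first response. Responses arrive back in order.
import net from "node:net";

const socket = net.connect(80, "example.com", () => {
  socket.write(
    "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" +
      "GET /about HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n",
  );
});

let raw = "";
socket.on("data", (chunk) => {
  raw += chunk.toString();
});
socket.on("end", () => {
  // Both responses arrive concatenated, in the order the requests were sent.
  console.log(raw.split("\r\n")[0]); // status line of the first response
});
```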

Expand All @@ -70,7 +70,7 @@ Today, every HTTP/1.1-compliant proxy and server should support pipelining, thou
> [!NOTE]
> Unless you have a very specific immediate need, don't use this deprecated technique; switch to HTTP/2 instead. In HTTP/2, domain sharding is no longer useful: the HTTP/2 connection is able to handle parallel unprioritized requests very well. Domain sharding is even detrimental to performance. Most HTTP/2 implementations use a technique called [connection coalescing](https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/) to revert eventual domain sharding.
As an HTTP/1.x connection is serializing requests, even without any ordering, it can't be optimal without large enough available bandwidth. As a solution, browsers open several connections to each domain, sending parallel requests. Default was once 2 to 3 connections, but this has now increased to a more common use of 6 parallel connections. There is a risk of triggering [DoS](/en-US/docs/Glossary/DOS_attack) protection on the server side if attempting more than this number.
As an HTTP/1.x connection is serializing requests, even without any ordering, it can't be optimal without large enough available bandwidth. As a solution, browsers open several connections to each domain, sending parallel requests. Default was once 2 to 3 connections, but this has now increased to a more common use of 6 parallel connections. There is a risk of triggering {{Glossary("Denial of Service", "DoS")}} protection on the server side if attempting more than this number.

If the server wishes a faster website or application response, it is possible for the server to force the opening of more connections. For example, instead of having all resources on the same domain, say `www.example.com`, it could split over several domains, `www1.example.com`, `www2.example.com`, `www3.example.com`. Each of these domains resolves to the _same_ server, and the Web browser will open 6 connections to each (in our example, boosting the connections to 18). This technique is called _domain sharding_.
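Purely to illustrate the (deprecated) sharding technique described above, and not as part of this commit, here is a small TypeScript helper that deterministically spreads resource paths across the `www1`–`www3` hostnames from the example; the hash function is an arbitrary choice.

```ts
// Domain sharding sketch (deprecated with HTTP/2): spread resources across
// several hostnames so the browser opens more parallel HTTP/1.x connections.
const SHARDS = ["www1.example.com", "www2.example.com", "www3.example.com"];

function shardUrl(path: string): string {
  // Deterministic hash so the same path always maps to the same shard,
  // keeping caches effective.
  let hash = 0;
  for (const char of path) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  return `https://${SHARDS[hash % SHARDS.length]}${path}`;
}

console.log(shardUrl("/img/logo.png")); // e.g. https://www2.example.com/img/logo.png
```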

