Comparing Two High-Performance I/O Design Patterns
by Alexander Libman with Vladimir Gilbourd
November 25, 2005 

Summary
This article investigates and compares different design patterns of high performance TCP-based servers. In addition to existing approaches, it proposes a scalable single-codebase, multi-platform solution (with code examples) and describes its fine-tuning on different platforms. It also compares performance of Java, C# and C++ implementations of proposed and existing solutions.

System I/O can be blocking, non-blocking synchronous, or non-blocking asynchronous [1], [2]. Blocking I/O means that the calling system does not return control to the caller until the operation is finished. As a result, the caller is blocked and cannot perform other activities during that time. Most importantly, the caller thread cannot be reused for other request processing while waiting for the I/O to complete, and becomes a wasted resource during that time. For example, a read() operation on a socket in blocking mode will not return control if the socket buffer is empty, until some data becomes available.

By contrast, a non-blocking synchronous call returns control to the caller immediately. The caller is not made to wait, and the invoked system immediately returns one of two responses: if the call was executed and the results are ready, the caller is told so; alternatively, the invoked system can tell the caller that it has no resources (no data in the socket) to perform the requested action. In that case, it is the responsibility of the caller to repeat the call until it succeeds. For example, a read() operation on a socket in non-blocking mode may return the number of bytes read, or a special return code of -1 with errno set to EWOULDBLOCK/EAGAIN, meaning "not ready; try again later."
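As an aside, the same non-blocking contract is visible from Java's java.nio channels (the facilities the Java implementation discussed later builds on). The sketch below is illustrative only: it uses a Pipe instead of a real socket so it runs self-contained, and note that in Java a non-blocking read signals "not ready" by returning 0 rather than via errno.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;

public class NonBlockingReadDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);   // switch the read side to non-blocking mode

        ByteBuffer buf = ByteBuffer.allocate(64);
        // Nothing has been written yet: the call returns immediately with 0 bytes
        int n = pipe.source().read(buf);
        System.out.println("first read: " + n);

        pipe.sink().write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));
        // Data is on its way; retry until it becomes visible (still never blocking)
        while ((n = pipe.source().read(buf)) == 0) { /* not ready; try again */ }
        System.out.println("second read: " + n);

        pipe.sink().close();
        pipe.source().close();
    }
}
```

The retry loop is exactly the caller responsibility described above; a real server would of course wait on a demultiplexor instead of spinning.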

In a non-blocking asynchronous call, the calling function returns control to the caller immediately, reporting that the requested action was started. The calling system will execute the caller's request using additional system resources/threads and will notify the caller (by callback for example), when the result is ready for processing. For example, a Windows ReadFile() or POSIX aio_read() API returns immediately and initiates an internal system read operation. Of the three approaches, this non-blocking asynchronous approach offers the best scalability and performance.

This article investigates different non-blocking I/O multiplexing mechanisms and proposes a single multi-platform design pattern/solution. We hope that this article will help developers of high-performance TCP-based servers choose an optimal design solution. We also compare the performance of Java, C# and C++ implementations of the proposed and existing solutions. We exclude the blocking approach from further discussion and comparison altogether, as it is the least effective approach for scalability and performance.

Reactor and Proactor: two I/O multiplexing approaches

In general, I/O multiplexing mechanisms rely on an event demultiplexor [1], [3], an object that dispatches I/O events from a limited number of sources to the appropriate read/write event handlers. The developer registers interest in specific events and provides event handlers, or callbacks. The event demultiplexor delivers the requested events to the event handlers.

Two patterns that involve event demultiplexors are called Reactor and Proactor [1]. The Reactor pattern involves synchronous I/O, whereas the Proactor pattern involves asynchronous I/O. In the Reactor pattern, the event demultiplexor waits for events that indicate when a file descriptor or socket is ready for a read or write operation. The demultiplexor passes this event to the appropriate handler, which is responsible for performing the actual read or write.

In the Proactor pattern, by contrast, the handler—or the event demultiplexor on behalf of the handler—initiates asynchronous read and write operations. The I/O operation itself is performed by the operating system (OS). The parameters passed to the OS include the addresses of user-defined data buffers from which the OS gets data to write, or to which the OS puts data read. The event demultiplexor waits for events that indicate the completion of the I/O operation, and forwards those events to the appropriate handlers. For example, on Windows a handler could initiate async I/O (overlapped in Microsoft terminology) operations, and the event demultiplexor could wait for IOCompletion events [1]. The implementation of this classic asynchronous pattern is based on an asynchronous OS-level API, and we will call this implementation the "system-level" or "true" async, because the application fully relies on the OS to execute actual I/O.

An example will help you understand the difference between Reactor and Proactor. We will focus on the read operation here, as the write implementation is similar. Here's a read in Reactor:

  • An event handler declares interest in I/O events that indicate readiness for read on a particular socket
  • The event demultiplexor waits for events
  • An event comes in and wakes-up the demultiplexor, and the demultiplexor calls the appropriate handler
  • The event handler performs the actual read operation, handles the data read, declares renewed interest in I/O events, and returns control to the dispatcher
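The four steps above map almost one-to-one onto Java's Selector API, where the Selector plays the event demultiplexor role. This single-shot sketch is illustrative, not production code; a Pipe stands in for a connected socket:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;

public class MiniReactor {
    public static void main(String[] args) throws Exception {
        Selector demultiplexor = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);

        // Step 1: the handler declares interest in read-readiness events
        pipe.source().register(demultiplexor, SelectionKey.OP_READ);

        // Simulate a peer sending data
        pipe.sink().write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));

        // Step 2: the demultiplexor waits for events
        demultiplexor.select();

        // Step 3: an event arrived; dispatch to the appropriate handler
        for (SelectionKey key : demultiplexor.selectedKeys()) {
            if (key.isReadable()) {
                // Step 4: the handler itself performs the actual read
                ByteBuffer buf = ByteBuffer.allocate(64);
                int n = ((Pipe.SourceChannel) key.channel()).read(buf);
                System.out.println("handler read " + n + " bytes");
            }
        }
        demultiplexor.selectedKeys().clear();
        demultiplexor.close();
    }
}
```

Note that the demultiplexor only reports readiness; the read itself happens in handler code, which is the defining property of the Reactor.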

By comparison, here is a read operation in Proactor (true async):

  • A handler initiates an asynchronous read operation (note: the OS must support asynchronous I/O). In this case, the handler does not care about I/O readiness events, but instead registers interest in receiving completion events.
  • The event demultiplexor waits until the operation is completed
  • While the event demultiplexor waits, the OS executes the read operation in a parallel kernel thread, puts data into a user-defined buffer, and notifies the event demultiplexor that the read is complete
  • The event demultiplexor calls the appropriate handler;
  • The event handler handles the data from the user-defined buffer, starts a new asynchronous operation, and returns control to the event demultiplexor.
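The completion-event flow above also maps onto Java's NIO.2 API (AsynchronousFileChannel with a CompletionHandler), which was added in Java 7, well after this article was written. A minimal sketch follows, using a temporary file; true-async socket channels use the same handler shape:

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CountDownLatch;

public class MiniProactor {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("proactor", ".txt");
        Files.write(tmp, "payload".getBytes());

        CountDownLatch done = new CountDownLatch(1);
        ByteBuffer buf = ByteBuffer.allocate(64);

        AsynchronousFileChannel ch =
                AsynchronousFileChannel.open(tmp, StandardOpenOption.READ);

        // The handler initiates the read; the OS/JVM executes it in the background
        ch.read(buf, 0, null, new CompletionHandler<Integer, Void>() {
            @Override public void completed(Integer bytes, Void att) {
                // Completion event: the data is already in our buffer
                System.out.println("completed: " + bytes + " bytes");
                done.countDown();
            }
            @Override public void failed(Throwable exc, Void att) {
                done.countDown();
            }
        });

        done.await();   // the demultiplexor's wait is hidden inside the channel group
        ch.close();
        Files.delete(tmp);
    }
}
```

By the time completed() fires, the read has already been performed for us, exactly as in the Proactor steps above.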

Current practice

The open-source C++ development framework ACE [1], [3], developed by Douglas Schmidt et al., offers a wide range of platform-independent, low-level concurrency support classes (threading, mutexes, etc.). At the top level it provides two separate groups of classes: implementations of the ACE Reactor and the ACE Proactor. Although both of them are based on platform-independent primitives, these tools offer different interfaces.

The ACE Proactor gives much better performance and robustness on MS Windows, as Windows provides a very efficient async API based on operating-system-level support [4], [5].

Unfortunately, not all operating systems provide robust async OS-level support. For instance, many Unix systems do not, so the ACE Reactor is the preferable solution on UNIX (UNIX currently lacks robust async facilities for sockets). As a result, to achieve the best performance on each system, developers of networked applications need to maintain two separate code bases: an ACE Proactor-based solution on Windows and an ACE Reactor-based solution for Unix-based systems.

As we mentioned, the true async Proactor pattern requires operating-system-level support. Due to the differing nature of event handler and operating-system interaction, it is difficult to create common, unified external interfaces for both the Reactor and Proactor patterns. That, in turn, makes it hard to create a fully portable development framework that encapsulates the interface and OS-related differences.

Proposed solution

In this section, we propose a solution to the challenge of designing a portable framework for the Proactor and Reactor I/O patterns. To demonstrate this solution, we will transform a Reactor demultiplexor I/O solution into an emulated async I/O solution by moving the read/write operations from the event handlers into the demultiplexor (this is the "emulated async" approach). The following example illustrates that conversion for a read operation:

  • An event handler declares interest in I/O events (readiness for read) and provides the demultiplexor with information such as the address of a data buffer, or the number of bytes to read.
  • The dispatcher waits for events (for example, on select());
  • When an event arrives, it wakes up the dispatcher. The dispatcher performs a non-blocking read operation (it has all the necessary information to perform this operation) and on completion calls the appropriate handler.
  • The event handler handles the data from the user-defined buffer, declares renewed interest in I/O events (again supplying the data buffer address and the number of bytes to read), and returns control to the dispatcher.
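A minimal Java sketch of this emulation (the names here are illustrative, not TProactor's own): the handler registers a buffer and a completion callback with the dispatcher, and the dispatcher performs the read itself before dispatching "read completed":

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;
import java.util.function.Consumer;

public class EmulatedProactor {
    public static void main(String[] args) throws Exception {
        Selector sel = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);

        // The handler hands the dispatcher a buffer and a completion callback
        ByteBuffer buf = ByteBuffer.allocate(64);
        Consumer<ByteBuffer> onReadCompleted = b ->
                System.out.println("completed with " + b.position() + " bytes");
        pipe.source().register(sel, SelectionKey.OP_READ, onReadCompleted);

        pipe.sink().write(ByteBuffer.wrap("data".getBytes(StandardCharsets.UTF_8)));

        sel.select();                       // Step 1: wait for readiness
        for (SelectionKey key : sel.selectedKeys()) {
            Pipe.SourceChannel ch = (Pipe.SourceChannel) key.channel();
            ch.read(buf);                   // Step 2: the dispatcher reads, not the handler
            @SuppressWarnings("unchecked")
            Consumer<ByteBuffer> cb = (Consumer<ByteBuffer>) key.attachment();
            cb.accept(buf);                 // Step 3: dispatch "read completed"
        }
        sel.close();
    }
}
```

From the callback's point of view the interface is fully proactive: it receives a buffer that already contains the data.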

As we can see, by adding functionality to the demultiplexor I/O pattern, we were able to convert the Reactor pattern to a Proactor pattern. In terms of the amount of work performed, this approach is exactly the same as the Reactor pattern; we have simply shifted responsibilities between actors, so there is no performance degradation. The following lists of steps demonstrate that each approach performs an equal amount of work:

Standard/classic Reactor:

  • Step 1) wait for event (Reactor job)
  • Step 2) dispatch "Ready-to-Read" event to user handler (Reactor job)
  • Step 3) read data (user handler job)
  • Step 4) process data (user handler job)

Proposed emulated Proactor:

  • Step 1) wait for event (Proactor job)
  • Step 2) read data (now Proactor job)
  • Step 3) dispatch "Read-Completed" event to user handler (Proactor job)
  • Step 4) process data (user handler job)

With an operating system that does not provide an async I/O API, this approach allows us to hide the reactive nature of available socket APIs and to expose a fully proactive async interface. This allows us to create a fully portable platform-independent solution with a common external interface.

TProactor

The proposed solution (TProactor) was developed and implemented at Terabit P/L [6]. The solution has two alternative implementations, one in C++ and one in Java. The C++ version was built using ACE cross-platform low-level primitives and has a common unified async proactive interface on all platforms.

The main TProactor components are the Engine and WaitStrategy interfaces. Engine manages the async operations lifecycle. WaitStrategy manages concurrency strategies. WaitStrategy depends on Engine and the two always work in pairs. Interfaces between Engine and WaitStrategy are strongly defined.

Engines and waiting strategies are implemented as pluggable class-drivers (for the full list of all implemented Engines and corresponding WaitStrategies, see Appendix 1). TProactor is a highly configurable solution. It internally implements three engines (POSIX AIO, SUN AIO and Emulated AIO) and six different waiting strategies, based on asynchronous kernel APIs (for POSIX, this is currently inefficient due to internal POSIX AIO API problems) and on the synchronous Unix select(), poll(), /dev/poll (Solaris 5.8+), port_get() (Solaris 5.10), real-time (RT) signals (Linux 2.4+), epoll (Linux 2.6), and kqueue (FreeBSD) APIs. TProactor conforms to the standard ACE Proactor implementation interface. That makes it possible to develop a single cross-platform solution (POSIX/MS-Windows) with a common (ACE Proactor) interface.

With a set of mutually interchangeable "lego-style" Engines and WaitStrategies, a developer can choose the appropriate internal mechanism (engine and waiting strategy) at run time by setting appropriate configuration parameters. These settings may be chosen according to specific requirements, such as the number of connections, scalability, and the targeted OS. If the operating system supports an async API, a developer may use the true async approach; otherwise, the user can opt for an emulated async solution built on one of the sync waiting strategies. All of those strategies are hidden behind an emulated async façade.

For an HTTP server running on Sun Solaris, for example, the /dev/poll or port_get()-based engines are the most suitable choice, able to serve a huge number of connections; but for another UNIX solution with a limited number of connections but high throughput requirements, a select()-based engine may be a better approach. Such flexibility cannot be achieved with the standard ACE Reactor/Proactor, due to inherent algorithmic problems of the different wait strategies (see Appendix 2).
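Such run-time selection could look roughly like the following sketch. EngineConfig, WaitKind and the thresholds are invented here purely for illustration and do not reflect TProactor's actual configuration API:

```java
import java.util.Map;

// Hypothetical names: EngineConfig and WaitKind are not TProactor's own API.
public class EngineConfig {
    enum WaitKind { SELECT, DEV_POLL, EPOLL, KQUEUE }

    static WaitKind choose(Map<String, String> params) {
        String os = params.getOrDefault("os", "generic");
        int connections = Integer.parseInt(params.getOrDefault("connections", "100"));
        // Mirror the article's guidance: huge connection counts favour
        // /dev/poll- or epoll-style engines; small counts with high
        // throughput requirements favour a select()-based engine.
        if (os.equals("solaris") && connections > 10_000) return WaitKind.DEV_POLL;
        if (os.equals("linux")   && connections > 10_000) return WaitKind.EPOLL;
        if (os.equals("freebsd") && connections > 10_000) return WaitKind.KQUEUE;
        return WaitKind.SELECT;
    }

    public static void main(String[] args) {
        System.out.println(choose(Map.of("os", "solaris", "connections", "50000")));
        System.out.println(choose(Map.of("os", "generic", "connections", "200")));
    }
}
```

The point is only the shape of the mechanism: configuration parameters in, a concrete engine/wait-strategy pair out, with no change to the application-facing proactive interface.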

In terms of performance, our tests show that emulating proactive behavior on top of a reactive demultiplexor does not impose any overhead—it can be faster, but not slower. According to our test results, TProactor gives up to 10-35% better performance on average (measured in terms of both throughput and response times) than the reactive model of the standard ACE Reactor implementation on various UNIX/Linux platforms. On Windows it gives the same performance as the standard ACE Proactor.

Performance comparison (Java versus C++ versus C#)

In addition to C++, we also implemented TProactor in Java. As of JDK version 1.4, Java provides only the sync-based approach, logically similar to C select() [7], [8]. Java TProactor is based on Java's non-blocking facilities (the java.nio packages) and is logically similar to the C++ TProactor with a select()-based waiting strategy.

Figures 1 and 2 chart the transfer rate in bits/sec versus the number of connections. The charts compare a simple echo server built on the standard ACE Reactor (on RedHat Linux 9.0), the TProactor C++ and Java (IBM JVM 1.4) implementations on Microsoft Windows and RedHat Linux 9.0, and a C# echo server running on Windows. Performance of the native AIO APIs is represented by the "Async"-marked curves, emulated AIO (TProactor) by the "AsyncE" curves, and TP_Reactor by the "Synch" curves. All implementations were bombarded by the same client application—a continuous stream of arbitrary fixed-size messages over N connections.

The full set of tests was performed on the same hardware. Tests on different machines proved that relative results are consistent.

Figure 1. Windows XP/P4 2.6GHz HyperThreading/512 MB RAM.
Figure 2. Linux RedHat 2.4.20-smp/P4 2.6GHz HyperThreading/512 MB RAM.

User code example

The following is the skeleton of a simple TProactor-based Java echo server. In a nutshell, the developer only has to implement two interfaces: OpRead, with a buffer into which TProactor puts its read results, and OpWrite, with a buffer from which TProactor takes data. The developer also needs to implement protocol-specific logic by providing the onReadCompleted() and onWriteCompleted() callbacks in an AsynchHandler interface implementation. Those callbacks are asynchronously invoked by TProactor on completion of read/write operations and executed on a thread pool provided by TProactor (the developer doesn't need to write his own pool).

class EchoServerProtocol implements AsynchHandler
{
    AsynchChannel achannel = null;
    ByteBuffer buffer = ByteBuffer.allocate(4096);  // read target handed to TProactor

    EchoServerProtocol(Demultiplexor m, SelectableChannel channel) throws Exception
    {
        this.achannel = new AsynchChannel(m, this, channel);
    }

    public void start() throws Exception
    {
        // called after construction
        System.out.println(Thread.currentThread().getName() + ": EchoServer protocol started");
        achannel.read(buffer);
    }

    public void onReadCompleted(OpRead opRead) throws Exception
    {
        if (opRead.getError() != null)
        {
            // handle error, do clean-up if needed
            System.out.println("EchoServer::readCompleted: " + opRead.getError().toString());
            achannel.close();
            return;
        }

        if (opRead.getBytesCompleted() <= 0)
        {
            System.out.println("EchoServer::readCompleted: Peer closed " + opRead.getBytesCompleted());
            achannel.close();
            return;
        }

        ByteBuffer buffer = opRead.getBuffer();
        achannel.write(buffer);
    }

    public void onWriteCompleted(OpWrite opWrite) throws Exception
    {
        // logically similar to onReadCompleted
    }
}

IOHandler is a TProactor base class. AsynchHandler and the Demultiplexor, among other things, internally execute the wait strategy chosen by the developer.

Conclusion

TProactor provides a common, flexible, and configurable solution for multi-platform, high-performance communications development. All of the problems and complexities mentioned in Appendix 2 are hidden from the developer.

It is clear from the charts that C++ is still the preferable approach for high-performance communication solutions, but Java on Linux comes quite close. However, the overall Java performance was weakened by poor results on Windows. One reason may be that the Java 1.4 nio package is based on a select()-style API: Java NIO is essentially the Reactor pattern built on a select()-style API (see [7], [8]), although it does allow you to write your own select()-style provider (the equivalent of TProactor waiting strategies). Examining the Java NIO implementation for Windows (it is enough to look at the import symbols in jdk1.5.0\jre\bin\nio.dll), we can conclude that Java NIO 1.4.2 and 1.5.0 for Windows are based on the WSAEventSelect() API. That is better than select(), but slower than I/O completion ports for a significant number of connections. Should Java's nio be based on I/O completion ports, performance should improve; the required conversion of the Proactor pattern to the Reactor pattern would then have to be made inside nio.dll. Although such a conversion is more complicated than the Reactor-to-Proactor conversion, it can be implemented within the frame of the Java NIO interfaces (that is the topic of a future article, but we can provide the algorithm). At this time, no TProactor performance tests have been done on JDK 1.5.

Note. All tests for Java are performed on "raw" buffers (java.nio.ByteBuffer) without data processing.

Taking into account the latest activities to develop robust AIO on Linux [9], we can conclude that Linux Kernel API (io_xxxx set of system calls) should be more scalable in comparison with POSIX standard, but still not portable. In this case, TProactor with new Engine/Wait Strategy pair, based on native LINUX AIO can be easily implemented to overcome portability issues and to cover Linux native AIO with standard ACE Proactor interface.

Appendix I

Engines and waiting strategies implemented in TProactor

 

Engine Type (Wait Strategy - Operating System):

POSIX_AIO (true async): aio_read()/aio_write()
  • aio_suspend() - POSIX-compliant UNIX (not robust)
  • waiting for RT signal - POSIX (not robust)
  • callback function - SGI IRIX, Linux (not robust)

SUN_AIO (true async): aio_read()/aio_write()
  • aio_wait() - SUN (not robust)

Emulated Async: non-blocking read()/write()
  • select() - generic POSIX
  • poll() - mostly all POSIX implementations
  • /dev/poll - SUN
  • Linux RT signals - Linux
  • kqueue - FreeBSD

Appendix II

All sync waiting strategies can be divided into two groups:

  • edge-triggered (e.g. Linux RT signals)—signal readiness only when the socket becomes ready (changes state);
  • level-triggered (e.g. select(), poll(), /dev/poll)—signal readiness at any time while the socket is ready.

Let us describe some common logical problems for those groups:

  • edge-triggered group: after executing an I/O operation, the demultiplexing loop can lose the state of socket readiness. Example: the "read" handler did not read the whole chunk of data, so the socket remains ready for read, but the demultiplexor loop will not receive the next notification.
  • level-triggered group: when the demultiplexor loop detects readiness, it starts the user-defined read/write handler. But before the start, it should remove the socket descriptor from the set of monitored descriptors. Otherwise, the same event can be dispatched twice.
  • Obviously, solving these problems adds extra complexity to development. All of these problems are resolved internally within TProactor, so the developer need not worry about those details, while in the sync approach one needs to apply extra effort to resolve them.
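The level-triggered double-dispatch problem, and the fix of temporarily removing the descriptor from the monitored set, can be demonstrated with Java's Selector (which is level-triggered). An illustrative sketch:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;

public class LevelTriggeredDemo {
    public static void main(String[] args) throws Exception {
        Selector sel = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        SelectionKey key = pipe.source().register(sel, SelectionKey.OP_READ);

        pipe.sink().write(ByteBuffer.wrap("x".getBytes(StandardCharsets.UTF_8)));
        sel.select();                       // first dispatch: socket is ready
        sel.selectedKeys().clear();

        // Suspend monitoring before handing the work to a handler; otherwise
        // the still-unread data would make the descriptor fire again at once.
        key.interestOps(0);
        int again = sel.selectNow();        // no event re-dispatched
        System.out.println("events while suspended: " + again);

        key.interestOps(SelectionKey.OP_READ);  // resume after the handler finishes
        int resumed = sel.selectNow();          // data is still pending, so it fires
        System.out.println("events after resume: " + resumed);
        sel.close();
    }
}
```

This is precisely the bookkeeping that TProactor performs internally for the level-triggered wait strategies.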

Resources

[1] Douglas C. Schmidt, Stephen D. Huston "C++ Network Programming." 2002, Addison-Wesley ISBN 0-201-60464-7

[2] W. Richard Stevens "UNIX Network Programming" vol. 1 and 2, 1999, Prentice Hall, ISBN 0-13-490012-X

[3] Douglas C. Schmidt, Michael Stal, Hans Rohnert, Frank Buschmann "Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects, Volume 2" Wiley & Sons, NY 2000

[4] INFO: Socket Overlapped I/O Versus Blocking/Non-blocking Mode. Q181611. Microsoft Knowledge Base Articles.

[5] Microsoft MSDN. I/O Completion Ports.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/fs/i_o_completion_ports.asp

[6] TProactor (ACE compatible Proactor).
www.terabit.com.au

[7] JavaDoc java.nio.channels
http://java.sun.com/j2se/1.4.2/docs/api/java/nio/channels/package-summary.html

[8] JavaDoc Java.nio.channels.spi Class SelectorProvider 
http://java.sun.com/j2se/1.4.2/docs/api/java/nio/channels/spi/SelectorProvider.html

[9] Linux AIO development 
http://lse.sourceforge.net/io/aio.html, and
http://archive.linuxsymposium.org/ols2003/Proceedings/All-Reprints/Reprint-Pulavarty-OLS2003.pdf

See Also:

Ian Barile "I/O Multiplexing & Scalable Socket Servers", 2004 February, DDJ 

Further reading on event handling
- http://www.cs.wustl.edu/~schmidt/ACE-papers.html

The Adaptive Communication Environment
http://www.cs.wustl.edu/~schmidt/ACE.html

Terabit Solutions
http://terabit.com.au/solutions.php

About the authors

Alex Libman has been programming for 15 years. During the past 5 years, his main area of interest has been pattern-oriented multi-platform networked programming using C++ and Java. He is a big fan of, and contributor to, ACE.

Vlad Gilbourd works as a computer consultant, but wishes to spend more time listening to jazz :) As a hobby, he started and runs the www.corporatenews.com.au website.



from:
http://www.artima.com/articles/io_design_patterns.html

 



chatler 2012-05-21 11:24 鍙戣〃璇勮
]]>
Comparing Two High-Performance I/O Design Patternshttp://m.shnenglu.com/beautykingdom/archive/2010/09/08/126175.htmlchatlerchatlerWed, 08 Sep 2010 09:20:00 GMThttp://m.shnenglu.com/beautykingdom/archive/2010/09/08/126175.htmlhttp://m.shnenglu.com/beautykingdom/comments/126175.htmlhttp://m.shnenglu.com/beautykingdom/archive/2010/09/08/126175.html#Feedback0http://m.shnenglu.com/beautykingdom/comments/commentRss/126175.htmlhttp://m.shnenglu.com/beautykingdom/services/trackbacks/126175.html
Summary
This article investigates and compares different design patterns of high performance TCP-based servers. In addition to existing approaches, it proposes a scalable single-codebase, multi-platform solution (with code examples) and describes its fine-tuning on different platforms. It also compares performance of Java, C# and C++ implementations of proposed and existing solutions.

System I/O can be blocking, or non-blocking synchronous, or non-blocking asynchronous [12]. Blocking I/O means that the calling system does not return control to the caller until the operation is finished. As a result, the caller is blocked and cannot perform other activities during that time. Most important, the caller thread cannot be reused for other request processing while waiting for the I/O to complete, and becomes a wasted resource during that time. For example, a read() operation on a socket in blocking mode will not return control if the socket buffer is empty until some data becomes available.

By contrast, a non-blocking synchronous call returns control to the caller immediately. The caller is not made to wait, and the invoked system immediately returns one of two responses: If the call was executed and the results are ready, then the caller is told of that. Alternatively, the invoked system can tell the caller that the system has no resources (no data in the socket) to perform the requested action. In that case, it is the responsibility of the caller may repeat the call until it succeeds. For example, a read() operation on a socket in non-blocking mode may return the number of read bytes or a special return code -1 with errno set to EWOULBLOCK/EAGAIN, meaning "not ready; try again later."

In a non-blocking asynchronous call, the calling function returns control to the caller immediately, reporting that the requested action was started. The calling system will execute the caller's request using additional system resources/threads and will notify the caller (by callback for example), when the result is ready for processing. For example, a Windows ReadFile() or POSIX aio_read() API returns immediately and initiates an internal system read operation. Of the three approaches, this non-blocking asynchronous approach offers the best scalability and performance.

This article investigates different non-blocking I/O multiplexing mechanisms and proposes a single multi-platform design pattern/solution. We hope that this article will help developers of high performance TCP based servers to choose optimal design solution. We also compare the performance of Java, C# and C++ implementations of proposed and existing solutions. We will exclude the blocking approach from further discussion and comparison at all, as it the least effective approach for scalability and performance.

In general, I/O multiplexing mechanisms rely on an event demultiplexor [13], an object that dispatches I/O events from a limited number of sources to the appropriate read/write event handlers. The developer registers interest in specific events and provides event handlers, or callbacks. The event demultiplexor delivers the requested events to the event handlers.

Two patterns that involve event demultiplexors are called Reactor and Proactor [1]. The Reactor patterns involve synchronous I/O, whereas the Proactor pattern involves asynchronous I/O. In Reactor, the event demultiplexor waits for events that indicate when a file descriptor or socket is ready for a read or write operation. The demultiplexor passes this event to the appropriate handler, which is responsible for performing the actual read or write.

In the Proactor pattern, by contrast, the handler鈥攐r the event demultiplexor on behalf of the handler鈥攊nitiates asynchronous read and write operations. The I/O operation itself is performed by the operating system (OS). The parameters passed to the OS include the addresses of user-defined data buffers from which the OS gets data to write, or to which the OS puts data read. The event demultiplexor waits for events that indicate the completion of the I/O operation, and forwards those events to the appropriate handlers. For example, on Windows a handler could initiate async I/O (overlapped in Microsoft terminology) operations, and the event demultiplexor could wait for IOCompletion events [1]. The implementation of this classic asynchronous pattern is based on an asynchronous OS-level API, and we will call this implementation the "system-level" or "true" async, because the application fully relies on the OS to execute actual I/O.

An example will help you understand the difference between Reactor and Proactor. We will focus on the read operation here, as the write implementation is similar. Here's a read in Reactor:

  • An event handler declares interest in I/O events that indicate readiness for read on a particular socket
  • The event demultiplexor waits for events
  • An event comes in and wakes-up the demultiplexor, and the demultiplexor calls the appropriate handler
  • The event handler performs the actual read operation, handles the data read, declares renewed interest in I/O events, and returns control to the dispatcher

By comparison, here is a read operation in Proactor (true async):

  • A handler initiates an asynchronous read operation (note: the OS must support asynchronous I/O). In this case, the handler does not care about I/O readiness events, but is instead registers interest in receiving completion events.
  • The event demultiplexor waits until the operation is completed
  • While the event demultiplexor waits, the OS executes the read operation in a parallel kernel thread, puts data into a user-defined buffer, and notifies the event demultiplexor that the read is complete
  • The event demultiplexor calls the appropriate handler;
  • The event handler handles the data from user defined buffer, starts a new asynchronous operation, and returns control to the event demultiplexor.

Current practice

The open-source C++ development framework ACE [13] developed by Douglas Schmidt, et al., offers a wide range of platform-independent, low-level concurrency support classes (threading, mutexes, etc). On the top level it provides two separate groups of classes: implementations of the ACE Reactor and ACE Proactor. Although both of them are based on platform-independent primitives, these tools offer different interfaces.

The ACE Proactor gives much better performance and robustness on MS-Windows, as Windows provides a very efficient async API, based on operating-system-level support [45].

Unfortunately, not all operating systems provide full robust async OS-level support. For instance, many Unix systems do not. Therefore, ACE Reactor is a preferable solution in UNIX (currently UNIX does not have robust async facilities for sockets). As a result, to achieve the best performance on each system, developers of networked applications need to maintain two separate code-bases: an ACE Proactor based solution on Windows and an ACE Reactor based solution for Unix-based systems.

As we mentioned, the true async Proactor pattern requires operating-system-level support. Due to the differing nature of event handler and operating-system interaction, it is difficult to create common, unified external interfaces for both Reactor and Proactor patterns. That, in turn, makes it hard to create a fully portable development framework and encapsulate the interface and OS- related differences.

Proposed solution

In this section, we will propose a solution to the challenge of designing a portable framework for the Proactor and Reactor I/O patterns. To demonstrate this solution, we will transform a Reactor demultiplexor I/O solution to an emulated async I/O by moving read/write operations from event handlers inside the demultiplexor (this is "emulated async" approach). The following example illustrates that conversion for a read operation:

  • An event handler declares interest in I/O events (readiness for read) and provides the demultiplexor with information such as the address of a data buffer, or the number of bytes to read.
  • Dispatcher waits for events (for example, on select());
  • When an event arrives, it awakes up the dispatcher. The dispatcher performs a non- blocking read operation (it has all necessary information to perform this operation) and on completion calls the appropriate handler.
  • The event handler handles data from the user-defined buffer, declares new interest, along with information about where to put the data buffer and the number bytes to read in I/O events. The event handler then returns control to the dispatcher.

As we can see, by adding functionality to the demultiplexor I/O pattern, we were able to convert the Reactor pattern to a Proactor pattern. In terms of the amount of work performed, this approach is exactly the same as the Reactor pattern. We simply shifted responsibilities between different actors. There is no performance degradation because the amount of work performed is still the same. The work was simply performed by different actors. The following lists of steps demonstrate that each approach performs an equal amount of work:

Standard/classic Reactor:

  • Step 1) wait for event (Reactor job)
  • Step 2) dispatch "Ready-to-Read" event to user handler (Reactor job)
  • Step 3) read data (user handler job)
  • Step 4) process data (user handler job)

Proposed emulated Proactor:

  • Step 1) wait for event (Proactor job)
  • Step 2) read data (now Proactor job)
  • Step 3) dispatch "Read-Completed" event to user handler (Proactor job)
  • Step 4) process data (user handler job)

With an operating system that does not provide an async I/O API, this approach allows us to hide the reactive nature of available socket APIs and to expose a fully proactive async interface. This allows us to create a fully portable platform-independent solution with a common external interface.

TProactor

The proposed solution (TProactor) was developed and implemented at Terabit P/L [6]. The solution has two alternative implementations, one in C++ and one in Java. The C++ version was built using ACE cross-platform low-level primitives and has a common unified async proactive interface on all platforms.

The main TProactor components are the Engine and WaitStrategy interfaces. Engine manages the async operations lifecycle. WaitStrategy manages concurrency strategies. WaitStrategy depends on Engine and the two always work in pairs. Interfaces between Engine and WaitStrategy are strongly defined.

Engines and waiting strategies are implemented as pluggable class drivers (for the full list of implemented Engines and corresponding WaitStrategies, see Appendix 1). TProactor is a highly configurable solution. It internally implements three engines (POSIX AIO, SUN AIO and emulated AIO) and hides six different waiting strategies. The engines are based either on an asynchronous kernel API (for POSIX this is not efficient right now, due to problems in the POSIX AIO API internals) or on synchronous Unix APIs: select(), poll(), /dev/poll (Solaris 5.8+), port_get() (Solaris 5.10), real-time (RT) signals (Linux 2.4+), epoll (Linux 2.6), and kqueue (FreeBSD). TProactor conforms to the standard ACE Proactor implementation interface. That makes it possible to develop a single cross-platform solution (POSIX/MS Windows) with a common (ACE Proactor) interface.

With a set of mutually interchangeable "lego-style" Engines and WaitStrategies, a developer can choose the appropriate internal mechanism (engine and waiting strategy) at run time by setting configuration parameters. These settings may be chosen according to specific requirements, such as the number of connections, scalability, and the target OS. If the operating system supports an async API, a developer may use the true async approach; otherwise, the user can opt for an emulated async solution built on one of the sync waiting strategies. All of those strategies are hidden behind an emulated async façade.

For an HTTP server running on Sun Solaris, for example, the /dev/poll or port_get()-based engines are the most suitable choice, able to serve a huge number of connections; but for another UNIX solution with a limited number of connections and high throughput requirements, a select()-based engine may be a better approach. Such flexibility cannot be achieved with the standard ACE Reactor/Proactor, due to the inherent algorithmic problems of the different wait strategies (see Appendix 2).

In terms of performance, our tests show that emulating proactive behavior on top of a reactive core does not impose any overhead: it can be faster, but not slower. According to our test results, TProactor delivers on average 10-35% better performance (measured in terms of both throughput and response time) than the reactive model of the standard ACE Reactor implementation on various UNIX/Linux platforms. On Windows it gives the same performance as the standard ACE Proactor.

Performance comparison (JAVA versus C++ versus C#).

In addition to C++, we also implemented TProactor in Java. As of JDK 1.4, Java provides only the sync-based approach, logically similar to C select() [7], [8]. Java TProactor is based on Java's non-blocking facilities (the java.nio packages) and is logically similar to the C++ TProactor with a select()-based waiting strategy.

Figures 1 and 2 chart the transfer rate in bits/sec versus the number of connections. These charts compare a simple echo server built on the standard ACE Reactor (on Red Hat Linux 9.0), TProactor in C++ and Java (IBM 1.4 JVM) on Microsoft Windows and Red Hat Linux 9.0, and a C# echo server running on Windows. Performance of the native AIO APIs is represented by the "Async"-marked curves; emulated AIO (TProactor) by the "AsyncE" curves; and TP_Reactor by the "Synch" curves. All implementations were bombarded by the same client application: a continuous stream of fixed-size messages over N connections.

The full set of tests was performed on the same hardware. Tests on different machines proved that relative results are consistent.

Figure 1. Windows XP/P4 2.6GHz HyperThreading/512 MB RAM.
Figure 2. Linux RedHat 2.4.20-smp/P4 2.6GHz HyperThreading/512 MB RAM.

User code example

The following is the skeleton of a simple TProactor-based Java echo server. In a nutshell, the developer only has to implement two interfaces: OpRead, with a buffer where TProactor puts its read results, and OpWrite, with a buffer from which TProactor takes data. The developer also needs to implement protocol-specific logic by providing the onReadCompleted() and onWriteCompleted() callbacks in an AsynchHandler interface implementation. Those callbacks are called asynchronously by TProactor on completion of read/write operations, and they execute on a thread pool provided by TProactor (the developer does not need to write a pool of his own).

class EchoServerProtocol implements AsynchHandler
{

    AsynchChannel achannel = null;
    ByteBuffer buffer = ByteBuffer.allocate( 1024 ); // read buffer passed to achannel.read() in start()

    EchoServerProtocol( Demultiplexor m,  SelectableChannel channel ) throws Exception
    {
        this.achannel = new AsynchChannel( m, this, channel );
    }

    public void start() throws Exception
    {
        // called after construction
        System.out.println( Thread.currentThread().getName() + ": EchoServer protocol started" );
        achannel.read( buffer);
    }

    public void onReadCompleted( OpRead opRead ) throws Exception
    {
        if ( opRead.getError() != null )
        {
            // handle error, do clean-up if needed
            System.out.println( "EchoServer::readCompleted: " + opRead.getError().toString());
            achannel.close();
            return;
        }

        if ( opRead.getBytesCompleted() <= 0 )
        {
            System.out.println( "EchoServer::readCompleted: Peer closed " + opRead.getBytesCompleted());
            achannel.close();
            return;
        }

        ByteBuffer buffer = opRead.getBuffer();

        achannel.write(buffer);
    }

    public void onWriteCompleted(OpWrite opWrite) throws Exception
    {
        // logically similar to onReadCompleted
        ...
    }
}

IOHandler is a TProactor base class. AsynchHandler and Demultiplexor, among other things, internally execute the wait strategy chosen by the developer.

Conclusion

TProactor provides a common, flexible, and configurable solution for multi-platform, high-performance communications development. All of the problems and complexities mentioned in Appendix 2 are hidden from the developer.

It is clear from the charts that C++ is still the preferable approach for high-performance communication solutions, but Java on Linux comes quite close. However, overall Java performance was weakened by poor results on Windows. One reason may be that the Java 1.4 nio package is based on a select()-style API: Java NIO is essentially a Reactor pattern built on a select()-style API (see [7], [8]). Java NIO does allow you to write your own select()-style provider (the equivalent of TProactor's waiting strategies). Examining the Java NIO implementation for Windows (it is enough to inspect the import symbols in jdk1.5.0\jre\bin\nio.dll), we can conclude that Java NIO 1.4.2 and 1.5.0 for Windows are based on the WSAEventSelect() API. That is better than select(), but slower than I/O completion ports for a significant number of connections. If Java NIO were based on I/O completion ports, performance should improve; the conversion of the Proactor pattern to the Reactor pattern would then have to be made inside nio.dll. Although such a conversion is more complicated than the Reactor-to-Proactor conversion, it can be implemented within the frame of the Java NIO interfaces (this is a topic for the next article, but we can provide the algorithm). At the time of writing, no TProactor performance tests had been done on JDK 1.5.

Note. All tests for Java are performed on "raw" buffers (java.nio.ByteBuffer) without data processing.

Taking into account the latest activities to develop robust AIO on Linux [9], we can conclude that the Linux kernel API (the io_xxxx set of system calls) should be more scalable than the POSIX standard, but it is still not portable. In that case, a new TProactor Engine/WaitStrategy pair based on native Linux AIO could easily be implemented to overcome portability issues and to cover Linux native AIO with the standard ACE Proactor interface.

Appendix I

Engines and waiting strategies implemented in TProactor

 

Engine Type                    Wait Strategy              Operating System
POSIX_AIO (true async):        aio_suspend()              POSIX-compliant UNIX (not robust)
aio_read()/aio_write()         Waiting for RT signal      POSIX (not robust)
                               Callback function          SGI IRIX, Linux (not robust)
SUN_AIO (true async):          aio_wait()                 Sun (not robust)
aio_read()/aio_write()
Emulated Async:                select()                   generic POSIX
non-blocking read()/write()    poll()                     mostly all POSIX implementations
                               /dev/poll                  Sun
                               Linux RT signals           Linux
                               kqueue                     FreeBSD

Appendix II

All sync waiting strategies can be divided into two groups:

  • edge-triggered (e.g. Linux RT signals): signal readiness only when the socket changes state (becomes ready);
  • level-triggered (e.g. select(), poll(), /dev/poll): report readiness at any time while the socket is ready.

Let us describe some common logical problems for those groups:

  • edge-triggered group: after executing an I/O operation, the demultiplexing loop can lose the socket's readiness state. Example: the "read" handler did not read the whole chunk of data, so the socket remains ready for read, but the demultiplexor loop will not receive the next notification.
  • level-triggered group: when the demultiplexor loop detects readiness, it starts the user-defined read/write handler; but before doing so, it should remove the socket descriptor from the set of monitored descriptors. Otherwise, the same event can be dispatched twice.
  • Obviously, solving these problems adds extra complexity to development. All of them are resolved internally within TProactor, so the developer need not worry about these details, while in the sync approach one has to apply extra effort to resolve them.

Resources

[1] Douglas C. Schmidt, Stephen D. Huston "C++ Network Programming." 2002, Addison-Wesley ISBN 0-201-60464-7

[2] W. Richard Stevens "UNIX Network Programming" vol. 1 and 2, 1999, Prentice Hall, ISBN 0-13-490012-X 

[3] Douglas C. Schmidt, Michael Stal, Hans Rohnert, Frank Buschmann "Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects, Volume 2" Wiley & Sons, NY 2000

[4] INFO: Socket Overlapped I/O Versus Blocking/Non-blocking Mode. Q181611. Microsoft Knowledge Base Articles.

[5] Microsoft MSDN. I/O Completion Ports.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/fs/i_o_completion_ports.asp

[6] TProactor (ACE compatible Proactor).
www.terabit.com.au

[7] JavaDoc java.nio.channels
http://java.sun.com/j2se/1.4.2/docs/api/java/nio/channels/package-summary.html

[8] JavaDoc Java.nio.channels.spi Class SelectorProvider 
http://java.sun.com/j2se/1.4.2/docs/api/java/nio/channels/spi/SelectorProvider.html

[9] Linux AIO development 
http://lse.sourceforge.net/io/aio.html, and
http://archive.linuxsymposium.org/ols2003/Proceedings/All-Reprints/Reprint-Pulavarty-OLS2003.pdf

See Also:

Ian Barile "I/O Multiplexing & Scalable Socket Servers", 2004 February, DDJ 

Further reading on event handling
- http://www.cs.wustl.edu/~schmidt/ACE-papers.html

The Adaptive Communication Environment
http://www.cs.wustl.edu/~schmidt/ACE.html

Terabit Solutions
http://terabit.com.au/solutions.php

About the authors

Alex Libman has been programming for 15 years. During the past 5 years his main area of interest has been pattern-oriented, multi-platform networked programming using C++ and Java. He is a big fan of, and contributor to, ACE.

Vlad Gilbourd works as a computer consultant, but wishes he could spend more time listening to jazz :) As a hobby, he started and runs the www.corporatenews.com.au website.

from:

http://www.artima.com/articles/io_design_patterns.html



chatler 2010-09-08 17:20
]]>
涓涓熀浜庡畬鎴愮鍙g殑TCP Server Framework,嫻呮瀽IOCPhttp://m.shnenglu.com/beautykingdom/archive/2010/08/25/124731.htmlchatlerchatlerWed, 25 Aug 2010 12:42:00 GMThttp://m.shnenglu.com/beautykingdom/archive/2010/08/25/124731.htmlhttp://m.shnenglu.com/beautykingdom/comments/124731.htmlhttp://m.shnenglu.com/beautykingdom/archive/2010/08/25/124731.html#Feedback0http://m.shnenglu.com/beautykingdom/comments/commentRss/124731.htmlhttp://m.shnenglu.com/beautykingdom/services/trackbacks/124731.html濡傛灉浣犱笉鎶曢掞紙POST錛塐verlapped I/O錛岄偅涔圛/O Completion Ports 鍙兘涓轟綘鎻愪緵涓涓猀ueue. 
    On CreateIoCompletionPort's NumberOfConcurrentThreads:
1. It takes effect only when the second parameter, ExistingCompletionPort, is NULL; it is a cap on concurrently running threads.
2. Plenty of people set it to a value well beyond the CPU count, and not merely a small multiple of it, but the MAX_THREADS of 100 below, or even larger.
As for what value to use, MSDN never insists on a multiple of the CPU count, nor does it drag in concerns about reducing context switches between threads. From the I/O Completion Ports page on MSDN: "If your transaction required a lengthy computation, a larger concurrency value will allow more threads to run. Each completion packet may take longer to finish, but more completion packets will be processed at the same time."
    For struct OVERLAPPED, we usually extend it as follows:
typedef struct {
  WSAOVERLAPPED overlapped; //must be first member?  Yes, it must be the first. If you are not sure, try it for yourself.
  SOCKET client_s;
  SOCKADDR_IN client_addr;
  WORD optCode;//1--read,2--send.  Some people define this member and some do not; the argument is whether it is needed to tell sync from async completion of send/WSASend. At least the server below does not use it.
  char buf[MAX_BUF_SIZE];
  WSABUF wsaBuf;//inited ?  Don't forget to initialize this one!
  DWORD numberOfBytesTransferred;
  DWORD flags;   

}QSSOverlapped;//one per connection
The basic idea of the server framework below is:
One connection VS one thread in worker thread pool; worker threads perform completionWorkerRoutine.
An Acceptor thread is dedicated to accepting sockets, associating them with the IOCP, and calling WSARecv: posting Recv completion packets to the IOCP.
completionWorkerRoutine has the following responsibilities:
1. Handle requests; when busy, grow the number of completionWorkerThreads, but never beyond maxThreads, and post Recv completion packets to the IOCP.
2. On timeout, check for idleness and the current completionWorkerThread count; when idle, hold at or shrink back down to minThreads.
3. Manage the lifecycle of all accepted sockets. Here the system's keepalive probes do the work; to implement an application-level "heartbeat" instead, just set QSS_SIO_KEEPALIVE_VALS_TIMEOUT back to the system default of 2 hours.
Below, a brief look at IOCP alongside the source code:
socketserver.h
#ifndef __Q_SOCKET_SERVER__
#define __Q_SOCKET_SERVER__
#include <winsock2.h>
#include <mstcpip.h>
#define QSS_SIO_KEEPALIVE_VALS_TIMEOUT 30*60*1000
#define QSS_SIO_KEEPALIVE_VALS_INTERVAL 5*1000

#define MAX_THREADS 100
#define MAX_THREADS_MIN  10
#define MIN_WORKER_WAIT_TIMEOUT  20*1000
#define MAX_WORKER_WAIT_TIMEOUT  60*MIN_WORKER_WAIT_TIMEOUT

#define MAX_BUF_SIZE 1024

/* CSocketLifecycleCallback is called back when a socket is accepted and when the socket is closed or hits an error */
typedef void (*CSocketLifecycleCallback)(SOCKET cs,int lifecycle);//lifecycle:0:OnAccepted,-1:OnClose//note: at OnClose the socket is not necessarily usable; it may already have been closed abnormally or hit some other error

/* protocol handler callback */
typedef int (*InternalProtocolHandler)(LPWSAOVERLAPPED overlapped);//return -1:SOCKET_ERROR

typedef struct Q_SOCKET_SERVER SocketServer;
DWORD initializeSocketServer(SocketServer ** ssp,WORD passive,WORD port,CSocketLifecycleCallback cslifecb,InternalProtocolHandler protoHandler,WORD minThreads,WORD maxThreads,long workerWaitTimeout);
DWORD startSocketServer(SocketServer *ss);
DWORD shutdownSocketServer(SocketServer *ss);

#endif
 qsocketserver.c      abbreviated qss below; the corresponding OVERLAPPED is abbreviated qssOl.
#include "socketserver.h"
#include "stdio.h"
typedef struct {  
  WORD passive;//daemon
  WORD port;
  WORD minThreads;
  WORD maxThreads;
  volatile long lifecycleStatus;//0-created,1-starting, 2-running,3-stopping,4-exitKeyPosted,5-stopped 
  long  workerWaitTimeout;//wait timeout  
  CRITICAL_SECTION QSS_LOCK;
  volatile long workerCounter;
  volatile long currentBusyWorkers;
  volatile long CSocketsCounter;//reference count of accepted sockets
  CSocketLifecycleCallback cslifecb;
  InternalProtocolHandler protoHandler;
  WORD wsaVersion;//=MAKEWORD(2,0);
  WSADATA wsData;
  SOCKET server_s;
  SOCKADDR_IN serv_addr;
  HANDLE iocpHandle;
}QSocketServer;

typedef struct {
  WSAOVERLAPPED overlapped;  
  SOCKET client_s;
  SOCKADDR_IN client_addr;
  WORD optCode;
  char buf[MAX_BUF_SIZE];
  WSABUF wsaBuf;
  DWORD numberOfBytesTransferred;
  DWORD flags;
}QSSOverlapped;

DWORD  acceptorRoutine(LPVOID);
DWORD  completionWorkerRoutine(LPVOID);

static void adjustQSSWorkerLimits(QSocketServer *qss){
  /*adjust size and timeout.*/
  /*if(qss->maxThreads <= 0) {
   qss->maxThreads = MAX_THREADS;
        } else if (qss->maxThreads < MAX_THREADS_MIN) {            
         qss->maxThreads = MAX_THREADS_MIN;
        }
        if(qss->minThreads >  qss->maxThreads) {
         qss->minThreads =  qss->maxThreads;
        }
        if(qss->minThreads <= 0) {
            if(1 == qss->maxThreads) {
             qss->minThreads = 1;
            } else {
             qss->minThreads = qss->maxThreads/2;
            }
        }
        
        if(qss->workerWaitTimeout<MIN_WORKER_WAIT_TIMEOUT) 
         qss->workerWaitTimeout=MIN_WORKER_WAIT_TIMEOUT;
        if(qss->workerWaitTimeout>MAX_WORKER_WAIT_TIMEOUT)
         qss->workerWaitTimeout=MAX_WORKER_WAIT_TIMEOUT;        */
}

typedef struct{
 QSocketServer * qss;
 HANDLE th;
}QSSWORKER_PARAM;

static WORD addQSSWorker(QSocketServer *qss,WORD addCounter){
 WORD res=0;
 if(qss->workerCounter<qss->minThreads||(qss->currentBusyWorkers==qss->workerCounter&&qss->workerCounter<qss->maxThreads)){
  DWORD threadId;
  QSSWORKER_PARAM * pParam=NULL;
  int i=0;  
  EnterCriticalSection(&qss->QSS_LOCK);
  if(qss->workerCounter+addCounter<=qss->maxThreads)
   for(;i<addCounter;i++)
   {
    pParam=malloc(sizeof(QSSWORKER_PARAM));
    if(pParam){
     pParam->th=CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)completionWorkerRoutine,pParam,CREATE_SUSPENDED,&threadId);
     pParam->qss=qss;
     ResumeThread(pParam->th);
     qss->workerCounter++,res++; 
    }    
   }  
  LeaveCriticalSection(&qss->QSS_LOCK);
 }  
 return res;
}

static void SOlogger(const char * msg,SOCKET s,int clearup){
 perror(msg);
 if(s>0)
 closesocket(s);
 if(clearup)
 WSACleanup();
}

static int _InternalEchoProtocolHandler(LPWSAOVERLAPPED overlapped){
 QSSOverlapped *qssOl=(QSSOverlapped *)overlapped;
 
 printf("numOfT:%d,WSARecvd:%s,\n",qssOl->numberOfBytesTransferred,qssOl->buf);
 //Sleep(500); 
 return send(qssOl->client_s,qssOl->buf,qssOl->numberOfBytesTransferred,0);
}

DWORD initializeSocketServer(SocketServer ** ssp,WORD passive,WORD port,CSocketLifecycleCallback cslifecb,InternalProtocolHandler protoHandler,WORD minThreads,WORD maxThreads,long workerWaitTimeout){
 QSocketServer * qss=malloc(sizeof(QSocketServer));
 qss->passive=passive>0?1:0;
 qss->port=port;
 qss->minThreads=minThreads;
 qss->maxThreads=maxThreads;
 qss->workerWaitTimeout=workerWaitTimeout;
 qss->wsaVersion=MAKEWORD(2,0); 
 qss->lifecycleStatus=0;
 InitializeCriticalSection(&qss->QSS_LOCK);
 qss->workerCounter=0;
 qss->currentBusyWorkers=0;
 qss->CSocketsCounter=0;
 qss->cslifecb=cslifecb,qss->protoHandler=protoHandler;
 if(!qss->protoHandler)
  qss->protoHandler=_InternalEchoProtocolHandler; 
 adjustQSSWorkerLimits(qss);
 *ssp=(SocketServer *)qss;
 return 1;
}

DWORD startSocketServer(SocketServer *ss){ 
 QSocketServer * qss=(QSocketServer *)ss;
 if(qss==NULL||InterlockedCompareExchange(&qss->lifecycleStatus,1,0))
  return 0; 
 qss->serv_addr.sin_family=AF_INET;
 qss->serv_addr.sin_port=htons(qss->port);
 qss->serv_addr.sin_addr.s_addr=INADDR_ANY;//inet_addr("127.0.0.1");
 if(WSAStartup(qss->wsaVersion,&qss->wsData)){  
  /* A side note: when WSAStartup is called, it actually starts an extra thread, which exits on its own a little later. Who knows what WSACleanup does in turn... */

  SOlogger("WSAStartup failed.\n",0,0);
  return 0;
 }
 qss->server_s=socket(AF_INET,SOCK_STREAM,IPPROTO_IP);
 if(qss->server_s==INVALID_SOCKET){  
  SOlogger("socket failed.\n",0,1);
  return 0;
 }
 if(bind(qss->server_s,(LPSOCKADDR)&qss->serv_addr,sizeof(SOCKADDR_IN))==SOCKET_ERROR){  
  SOlogger("bind failed.\n",qss->server_s,1);
  return 0;
 }
 if(listen(qss->server_s,SOMAXCONN)==SOCKET_ERROR)/*A word about backlog: many people do not know what value to use; I have seen 1, 5, 50 and 100, and some say the bigger it is, the more resources it costs. True, but passing SOMAXCONN here does not mean Windows will actually use SOMAXCONN; rather, "If set to SOMAXCONN, the underlying service provider responsible for socket s will set the backlog to a maximum reasonable value." In practice, different operating systems support TCP backlog queues of different sizes, so you might as well let the operating system decide. A server like Apache uses:
#ifndef DEFAULT_LISTENBACKLOG
#define DEFAULT_LISTENBACKLOG 511
#endif
*/
    {        
  SOlogger("listen failed.\n",qss->server_s,1);
        return 0;
    }
 qss->iocpHandle=CreateIoCompletionPort(INVALID_HANDLE_VALUE,NULL,0,/*NumberOfConcurrentThreads-->*/qss->maxThreads);
 //initialize worker for completion routine.
 addQSSWorker(qss,qss->minThreads);  
 qss->lifecycleStatus=2;
 {
  QSSWORKER_PARAM * pParam=malloc(sizeof(QSSWORKER_PARAM));
  pParam->qss=qss;
  pParam->th=NULL;
  if(qss->passive){
   DWORD threadId;
   pParam->th=CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)acceptorRoutine,pParam,0,&threadId); 
  }else
   return acceptorRoutine(pParam);
 }
 return 1;
}

DWORD shutdownSocketServer(SocketServer *ss){
 QSocketServer * qss=(QSocketServer *)ss;
 if(qss==NULL||InterlockedCompareExchange(&qss->lifecycleStatus,3,2)!=2)
  return 0; 
 closesocket(qss->server_s/*listen-socket*/);//..other accepted-sockets associated with the listen-socket will not be closed,except WSACleanup is called.. 
 if(qss->CSocketsCounter==0)
  qss->lifecycleStatus=4,PostQueuedCompletionStatus(qss->iocpHandle,0,-1,NULL);
 WSACleanup();  
 return 1;
}

DWORD  acceptorRoutine(LPVOID ss){
 QSSWORKER_PARAM * pParam=(QSSWORKER_PARAM *)ss;
 QSocketServer * qss=pParam->qss;
 HANDLE curThread=pParam->th;
 QSSOverlapped *qssOl=NULL;
 SOCKADDR_IN client_addr;
 int client_addr_leng=sizeof(SOCKADDR_IN);
 SOCKET cs; 
 free(pParam);
 while(1){  
  printf("accept starting.....\n");
  cs/*Accepted-socket*/=accept(qss->server_s,(LPSOCKADDR)&client_addr,&client_addr_leng);
  if(cs==INVALID_SOCKET)
        {
   printf("accept failed:%d\n",GetLastError());   
            break;
        }else{//SO_KEEPALIVE,SIO_KEEPALIVE_VALS: use the system's keepalive probes here. On Linux: setsockopt with SOL_TCP: TCP_KEEPIDLE,TCP_KEEPINTVL,TCP_KEEPCNT
            struct tcp_keepalive alive,aliveOut;
            int so_keepalive_opt=1;
            DWORD outDW;
            if(!setsockopt(cs,SOL_SOCKET,SO_KEEPALIVE,(char *)&so_keepalive_opt,sizeof(so_keepalive_opt))){
               alive.onoff=TRUE;
               alive.keepalivetime=QSS_SIO_KEEPALIVE_VALS_TIMEOUT;
               alive.keepaliveinterval=QSS_SIO_KEEPALIVE_VALS_INTERVAL;
               if(WSAIoctl(cs,SIO_KEEPALIVE_VALS,&alive,sizeof(alive),&aliveOut,sizeof(aliveOut),&outDW,NULL,NULL)==SOCKET_ERROR){
                    printf("WSAIoctl SIO_KEEPALIVE_VALS failed:%d\n",GetLastError());   
                    break;
                }

            }else{
                     printf("setsockopt SO_KEEPALIVE failed:%d\n",GetLastError());   
                     break;
            }  
  }
  
  CreateIoCompletionPort((HANDLE)cs,qss->iocpHandle,cs,0);
  if(qssOl==NULL){
   qssOl=malloc(sizeof(QSSOverlapped));   
  }
  qssOl->client_s=cs;
  qssOl->wsaBuf.len=MAX_BUF_SIZE,qssOl->wsaBuf.buf=qssOl->buf,qssOl->numberOfBytesTransferred=0,qssOl->flags=0;//initialize WSABuf.
  memset(&qssOl->overlapped,0,sizeof(WSAOVERLAPPED));  
  {
   DWORD lastErr=GetLastError();
   int ret=0;
   SetLastError(0);
   ret=WSARecv(cs,&qssOl->wsaBuf,1,&qssOl->numberOfBytesTransferred,&qssOl->flags,&qssOl->overlapped,NULL);
   if(ret==0||(ret==SOCKET_ERROR&&GetLastError()==WSA_IO_PENDING)){
    InterlockedIncrement(&qss->CSocketsCounter);//increment the accepted-socket reference count.
    if(qss->cslifecb)
     qss->cslifecb(cs,0);
    qssOl=NULL;
   }    
   
   if(!GetLastError())
    SetLastError(lastErr);
  }
  
  printf("accept flags:%d ,cs:%d.\n",GetLastError(),cs);
 }//end while.

 if(qssOl)
  free(qssOl);
 if(qss)
  shutdownSocketServer((SocketServer *)qss);
 if(curThread)
  CloseHandle(curThread);

 return 1;
}

static int postRecvCompletionPacket(QSSOverlapped * qssOl,int SOErrOccurredCode){ 
 int SOErrOccurred=0; 
 DWORD lastErr=GetLastError();
 SetLastError(0);
 //SOCKET_ERROR:-1,WSA_IO_PENDING:997
 if(WSARecv(qssOl->client_s,&qssOl->wsaBuf,1,&qssOl->numberOfBytesTransferred,&qssOl->flags,&qssOl->overlapped,NULL)==SOCKET_ERROR
  &&GetLastError()!=WSA_IO_PENDING)//this case lastError maybe 64, 10054 
 {
  SOErrOccurred=SOErrOccurredCode;  
 }      
 if(!GetLastError())
  SetLastError(lastErr); 
 if(SOErrOccurred)
  printf("worker[%d] postRecvCompletionPacket SOErrOccurred=%d,preErr:%d,postedErr:%d\n",GetCurrentThreadId(),SOErrOccurred,lastErr,GetLastError());
 return SOErrOccurred;
}

DWORD  completionWorkerRoutine(LPVOID ss){
 QSSWORKER_PARAM * pParam=(QSSWORKER_PARAM *)ss;
 QSocketServer * qss=pParam->qss;
 HANDLE curThread=pParam->th;
 QSSOverlapped * qssOl=NULL;
 DWORD numberOfBytesTransferred=0;
 ULONG_PTR completionKey=0;
 int postRes=0,handleCode=0,exitCode=0,SOErrOccurred=0; 
 free(pParam);
 while(!exitCode){
  SetLastError(0);
  if(GetQueuedCompletionStatus(qss->iocpHandle,&numberOfBytesTransferred,&completionKey,(LPOVERLAPPED *)&qssOl,qss->workerWaitTimeout)){
   if(completionKey==-1&&qss->lifecycleStatus>=4)
   {
    printf("worker[%d] completionKey -1:%d \n",GetCurrentThreadId(),GetLastError());
    if(qss->workerCounter>1)
     PostQueuedCompletionStatus(qss->iocpHandle,0,-1,NULL);
    exitCode=1;
    break;
   }
   if(numberOfBytesTransferred>0){   
    
    InterlockedIncrement(&qss->currentBusyWorkers);
    addQSSWorker(qss,1);
    handleCode=qss->protoHandler((LPWSAOVERLAPPED)qssOl);    
    InterlockedDecrement(&qss->currentBusyWorkers);    
    
    if(handleCode>=0){
     SOErrOccurred=postRecvCompletionPacket(qssOl,1);
    }else
     SOErrOccurred=2;    
   }else{
    printf("worker[%d] numberOfBytesTransferred==0 ***** closesocket servS or cs *****,%d,%d ,ol is:%d\n",GetCurrentThreadId(),GetLastError(),completionKey,qssOl==NULL?0:1);
    SOErrOccurred=3;     
   }  
  }else{ //GetQueuedCompletionStatus rtn FALSE, lastError 64 ,995[timeout worker thread exit.] ,WAIT_TIMEOUT:258        
   if(qssOl){
    SOErrOccurred=postRecvCompletionPacket(qssOl,4);
   }else {    

    printf("worker[%d] GetQueuedCompletionStatus F:%d \n",GetCurrentThreadId(),GetLastError());
    if(GetLastError()!=WAIT_TIMEOUT){
     exitCode=2;     
    }else{//wait timeout     
     if(qss->lifecycleStatus!=4&&qss->currentBusyWorkers==0&&qss->workerCounter>qss->minThreads){
      EnterCriticalSection(&qss->QSS_LOCK);
      if(qss->lifecycleStatus!=4&&qss->currentBusyWorkers==0&&qss->workerCounter>qss->minThreads){
       qss->workerCounter--;//until qss->workerCounter decrease to qss->minThreads
       exitCode=3;      
      }
      LeaveCriticalSection(&qss->QSS_LOCK);
     }
    }    
   }    
  }//end GetQueuedCompletionStatus.

  if(SOErrOccurred){   
   if(qss->cslifecb)
    qss->cslifecb(qssOl->client_s,-1);
   /*if(qssOl)*/{
    closesocket(qssOl->client_s);
    free(qssOl);
   }
   if(InterlockedDecrement(&qss->CSocketsCounter)==0&&qss->lifecycleStatus>=3){    
    //for qss workerSize,PostQueuedCompletionStatus -1
    qss->lifecycleStatus=4,PostQueuedCompletionStatus(qss->iocpHandle,0,-1,NULL);        
    exitCode=4;
   }
  }
  qssOl=NULL,numberOfBytesTransferred=0,completionKey=0,SOErrOccurred=0;//for net while.
 }//end while.

 //last to do 
 if(exitCode!=3){ 
  int clearup=0;
  EnterCriticalSection(&qss->QSS_LOCK);
  if(!--qss->workerCounter&&qss->lifecycleStatus>=4){//clearup QSS
    clearup=1;
  }
  LeaveCriticalSection(&qss->QSS_LOCK);
  if(clearup){
   DeleteCriticalSection(&qss->QSS_LOCK);
   CloseHandle(qss->iocpHandle);
   free(qss); 
  }
 }
 CloseHandle(curThread);
 return 1;
}
------------------------------------------------------------------------------------------------------------------------
    Telling apart and handling IOCP's LastError values is the tricky part, so pay attention to the while structure of my completionWorkerRoutine, which looks like this:
while(!exitCode){
    if(completionKey==-1){...break;}
    if(GetQueuedCompletionStatus){/*In this branch, as long as the OVERLAPPED you posted was not NULL, what you get here is that very OVERLAPPED.*/
        if(numberOfBytesTransferred>0){
               /*Handle the request here, and remember to keep re-posting your OVERLAPPED!*/
        }else{
              /*Here the client or the server may have called closesocket(the socket), but the OVERLAPPED is not NULL, as long as what you posted was not NULL!*/
        }
    }else{/*In this branch, although GetQueuedCompletionStatus returned FALSE, that does not mean the OVERLAPPED must be NULL. In particular, when the OVERLAPPED is not NULL, do not assume that a LastError means the current socket is useless or fatally broken; with lastError 995, for example, the socket may still be perfectly normal and usable, and you should not close it.*/
        if(OVERLAPPED is not NULL){
             /*In that case, just keep re-posting, and test for errors again after the post.*/
        }else{ 

        }
    }
  if(socket error occured){

  }
  prepare for next while.

    Written in haste, so errors and omissions are inevitable; please feel free to comment and correct me. Thanks!

    There is still room to improve this model's performance!


from:

http://m.shnenglu.com/adapterofcoms/archive/2010/06/26/118781.aspx



chatler 2010-08-25 20:42
]]>
涓涓熀浜嶦vent Poll(epoll)鐨凾CP Server Framework,嫻呮瀽epollhttp://m.shnenglu.com/beautykingdom/archive/2010/08/25/124730.htmlchatlerchatlerWed, 25 Aug 2010 12:41:00 GMThttp://m.shnenglu.com/beautykingdom/archive/2010/08/25/124730.htmlhttp://m.shnenglu.com/beautykingdom/comments/124730.htmlhttp://m.shnenglu.com/beautykingdom/archive/2010/08/25/124730.html#Feedback0http://m.shnenglu.com/beautykingdom/comments/commentRss/124730.htmlhttp://m.shnenglu.com/beautykingdom/services/trackbacks/124730.html闃呰鍏ㄦ枃

chatler 2010-08-25 20:41
]]>
TCP: SYN ACK FIN RST PSH URG Explainedhttp://m.shnenglu.com/beautykingdom/archive/2010/07/16/120546.htmlchatlerchatlerFri, 16 Jul 2010 06:14:00 GMThttp://m.shnenglu.com/beautykingdom/archive/2010/07/16/120546.htmlhttp://m.shnenglu.com/beautykingdom/comments/120546.htmlhttp://m.shnenglu.com/beautykingdom/archive/2010/07/16/120546.html#Feedback0http://m.shnenglu.com/beautykingdom/comments/commentRss/120546.htmlhttp://m.shnenglu.com/beautykingdom/services/trackbacks/120546.html Copyright notice: when reposting, please indicate the original source and author information, together with this notice, in hyperlink form.
http://xufish.blogbus.com/logs/40536553.html

How does TCP's three-way handshake actually proceed? The sender sends the receiver a packet with SYN=1, ACK=0, requesting a connection; this is the first handshake. If the receiver accepts the request, it sends back a packet with SYN=1, ACK=1, telling the sender that communication may begin and asking it to send a confirmation packet; this is the second handshake. Finally, the sender sends the receiver a packet with SYN=0, ACK=1, confirming that the connection is established; this is the third handshake. After that, a TCP connection exists and communication begins.

*SYN錛氬悓姝ユ爣蹇?br style="line-height: normal;">鍚屾搴忓垪緙栧彿(Synchronize Sequence Numbers)鏍忔湁鏁堛傝鏍囧織浠呭湪涓夋鎻℃墜寤虹珛TCP榪炴帴鏃舵湁鏁堛傚畠鎻愮ずTCP榪炴帴鐨勬湇鍔$媯鏌ュ簭鍒楃紪鍙鳳紝璇ュ簭鍒楃紪鍙蜂負TCP榪炴帴鍒濆绔?涓鑸槸瀹㈡埛 绔?鐨勫垵濮嬪簭鍒楃紪鍙楓傚湪榪欓噷錛屽彲浠ユ妸TCP搴忓垪緙栧彿鐪嬩綔鏄竴涓寖鍥翠粠0鍒?錛?94錛?67錛?95鐨?2浣嶈鏁板櫒銆傞氳繃TCP榪炴帴浜ゆ崲鐨勬暟鎹腑姣忎竴涓瓧 鑺傞兘緇忚繃搴忓垪緙栧彿銆傚湪TCP鎶ュご涓殑搴忓垪緙栧彿鏍忓寘鎷簡TCP鍒嗘涓涓涓瓧鑺傜殑搴忓垪緙栧彿銆?/p>

*ACK錛氱‘璁ゆ爣蹇?br style="line-height: normal;">紜緙栧彿(Acknowledgement Number)鏍忔湁鏁堛傚ぇ澶氭暟鎯呭喌涓嬭鏍囧織浣嶆槸緗綅鐨勩俆CP鎶ュご鍐呯殑紜緙栧彿鏍忓唴鍖呭惈鐨勭‘璁ょ紪鍙?w+1錛孎igure-1)涓轟笅涓涓鏈熺殑搴忓垪緙栧彿錛? 鍚屾椂鎻愮ず榪滅緋葷粺宸茬粡鎴愬姛鎺ユ敹鎵鏈夋暟鎹?/p>

*RST錛氬浣嶆爣蹇?br style="line-height: normal;">澶嶄綅鏍囧織鏈夋晥銆傜敤浜庡浣嶇浉搴旂殑TCP榪炴帴銆?/p>

*URG錛氱揣鎬ユ爣蹇?br style="line-height: normal;">绱ф?The urgent pointer) 鏍囧織鏈夋晥銆傜揣鎬ユ爣蹇楃疆浣嶏紝

*PSH錛氭帹鏍囧織
璇? 鏍囧織緗綅鏃訛紝鎺ユ敹绔笉灝嗚鏁版嵁榪涜闃熷垪澶勭悊錛岃屾槸灝藉彲鑳藉揩灝嗘暟鎹漿鐢卞簲鐢ㄥ鐞嗐傚湪澶勭悊 telnet 鎴?rlogin 絳変氦浜掓ā寮忕殑榪炴帴鏃訛紝璇ユ爣蹇楁繪槸緗綅鐨勩?/p>

*FIN: finish flag
A packet with this flag set is used to end a TCP session, but the corresponding port remains open, ready to receive subsequent data.

=============================================================

涓夋鎻℃墜Three-way Handshake

涓涓櫄鎷熻繛鎺ョ殑寤虹珛鏄氳繃涓夋鎻℃墜鏉ュ疄鐜扮殑

1. (B) --> [SYN] --> (A)

鍋囧鏈? 鍔″櫒A鍜屽鎴鋒満B閫氳. 褰揂瑕佸拰B閫氫俊鏃訛紝B棣栧厛鍚慉鍙戜竴涓猄YN (Synchronize) 鏍囪鐨勫寘錛屽憡璇堿璇鋒眰寤虹珛榪炴帴.

Note: a SYN packet is simply a TCP packet with only the SYN flag set (see the TCP header structure). Recognizing this point matters: A can establish the connection only after receiving the SYN packet from B — there is no other way. Therefore, if your firewall drops all SYN packets headed for external network interfaces, no external host will be able to actively establish a connection with you.
2. (B) <-- [SYN/ACK] <--(A)

Next, after receiving it, A replies with a confirmation of the SYN packet (SYN/ACK), acknowledging the first SYN packet and continuing the handshake.

Note: a SYN/ACK packet is a packet with only the SYN and ACK flags set.

3. (B) --> [ACK] --> (A)

B receives the SYN/ACK packet and sends an acknowledgement packet (ACK), notifying A that the connection is established. With that, the three-way handshake is complete and a TCP connection is set up.

Note: an ACK packet is simply a TCP packet with only the ACK flag set. Note that once the three-way handshake is complete and the connection established, every packet on the TCP connection has the ACK bit set.
榪欏氨鏄負浣曡繛鎺ヨ窡韙緢閲嶈鐨勫師鍥犱簡. 娌℃湁榪炴帴璺熻釜,闃茬伀澧欏皢鏃犳硶鍒ゆ柇鏀跺埌鐨凙CK鍖呮槸鍚﹀睘浜庝竴涓凡緇忓緩绔嬬殑榪炴帴.涓鑸殑鍖呰繃婊?Ipchains)鏀跺埌ACK鍖呮椂,浼氳瀹冮氳繃(榪欑粷瀵逛笉鏄釜 濂戒富鎰?. 鑰屽綋鐘舵佸瀷闃茬伀澧欐敹鍒版縐嶅寘鏃訛紝瀹冧細鍏堝湪榪炴帴琛ㄤ腑鏌ユ壘鏄惁灞炰簬鍝釜宸插緩榪炴帴錛屽惁鍒欎涪寮冭鍖?br style="line-height: normal;">
鍥涙鎻℃墜Four-way Handshake

鍥涙鎻℃墜鐢ㄦ潵鍏抽棴宸插緩 绔嬬殑TCP榪炴帴

1. (B) --> ACK/FIN --> (A)

2. (B) <-- ACK <-- (A)

3. (B) <-- ACK/FIN <-- (A)

4. (B) --> ACK --> (A)

Note: since a TCP connection is bidirectional, closing it must happen in both directions. The ACK/FIN packet (ACK and FIN flags both set to 1) is usually considered a FIN (final) packet; however, because the connection is not yet closed, the FIN packet always carries the ACK flag. A packet with only the FIN flag and no ACK flag is not a legitimate packet, and is usually considered malicious.
Resetting a connection

The four-way handshake is not the only way to close a TCP connection. Sometimes, if a host needs to close the connection as fast as possible (or the connection times out, or the port or host is unreachable), an RST (Reset) packet is sent. Note that since an RST packet is not a required part of a TCP connection, an RST packet can be sent on its own (i.e. without the ACK flag), although in a normal TCP connection an RST packet may carry the ACK acknowledgement flag.

Note that an RST packet does not require acknowledgement from the receiving side.

Invalid TCP Flags

So far you have seen the SYN, ACK, FIN, and RST flags. In addition there are the PSH (Push) and URG (Urgent) flags.

鏈甯歌鐨勯潪娉曠粍鍚堟槸SYN/FIN 鍖? 娉ㄦ剰:鐢變簬 SYN鍖呮槸鐢ㄦ潵鍒濆鍖栬繛鎺ョ殑, 瀹冧笉鍙兘鍜?FIN鍜孯ST鏍囪涓璧峰嚭鐜? 榪欎篃鏄竴涓伓鎰忔敾鍑?

Since most firewalls now know about SYN/FIN packets, other combinations appeared, for example SYN/FIN/PSH, SYN/FIN/RST, and SYN/FIN/RST/PSH. Clearly, when such packets show up on your network, it is almost certainly under attack.

Other known illegal packets are FIN (without the ACK flag) and "NULL" packets. As discussed earlier, because an ACK/FIN packet appears in order to close a TCP connection, a normal FIN packet always carries the ACK flag. A "NULL" packet is a packet with no TCP flags at all (URG, ACK, PSH, RST, SYN, and FIN all 0).

So far, under normal network activity, a TCP stack cannot produce a TCP packet with any of the flag combinations mentioned above. When you see such abnormal packets, someone certainly means your network no good.
UDP (User Datagram Protocol)
TCP is connection-oriented, whereas UDP is a connectionless protocol. UDP has no flags or acknowledgement mechanism for confirming receipt; handling of lost packets (or accidental arrival) is left to the application layer.

姝ゅ闇瑕侀噸鐐規(guī)敞鎰忕殑浜嬫儏鏄細鍦ㄦ甯告儏鍐典笅錛屽綋UDP鍖呭埌杈句竴涓叧闂殑绔彛鏃訛紝浼氳繑鍥炰竴涓猆DP澶嶄綅鍖呫傜敱浜嶶DP鏄潪闈㈠悜榪炴帴鐨? 鍥犳娌℃湁浠諱綍紜淇℃伅鏉ョ‘璁ゅ寘鏄惁姝g‘鍒拌揪鐩殑鍦般傚洜姝ゅ鏋滀綘鐨勯槻鐏涓㈠純UDP鍖咃紝瀹冧細寮鏀炬墍鏈夌殑UDP绔彛(?)銆?br style="line-height: normal;">
鐢變簬Internet 涓婃甯告儏鍐典笅涓浜涘寘灝嗚涓㈠純錛岀敋鑷蟲煇浜涘彂寰宸插叧闂鍙?闈為槻鐏鐨?鐨刄DP鍖呭皢涓嶄細鍒拌揪鐩殑錛屽畠浠皢榪斿洖涓涓浣峌DP鍖呫?br style="line-height: normal;">
鍥犱負榪欎釜鍘熷洜錛孶DP 绔彛鎵弿鎬繪槸涓嶇簿紜佷笉鍙潬鐨勩?br style="line-height: normal;">
Fragments of large UDP packets appear to be a common form of DoS (Denial of Service) attack (for an example of a DoS attack, see http://grc.com/dos/grcdos.htm).

ICMP (Internet Control Message Protocol)
As the name suggests, ICMP is the protocol used to pass control messages between hosts and routers. ICMP packets can carry diagnostic information (ping, traceroute — note that traceroute on current Unix systems uses UDP packets rather than ICMP), error information (network/host/port unreachable), informational messages (timestamp, address mask request, etc.), or control messages (source quench, redirect, etc.).
You can find the ICMP packet types at http://www.iana.org/assignments/icmp-parameters.
Although ICMP is usually harmless, a few ICMP message types should be dropped.
Redirect (5), Alternate Host Address (6), and Router Advertisement (9) can be used to redirect traffic.
Echo (8), Timestamp (13) and Address Mask Request (17) can be used to determine, respectively, whether a host is up, its local time, and its address mask. Note that their significance lies in the reply types they elicit: by themselves they cannot be exploited, but the information they leak is useful to an attacker.
ICMP messages are also sometimes used as part of DoS attacks (for example flood ping and the ping of death).

鍖呯鐗囨敞鎰廇 Note About Packet Fragmentation

濡傛灉涓涓寘鐨勫ぇ灝忚秴榪囦簡TCP鐨勬渶澶ф闀垮害MSS (Maximum Segment Size) 鎴朚TU (Maximum Transmission Unit)錛岃兘澶熸妸姝ゅ寘鍙戝線鐩殑鐨勫敮涓鏂規(guī)硶鏄妸姝ゅ寘鍒嗙墖銆傜敱浜庡寘鍒嗙墖鏄甯哥殑錛屽畠鍙互琚埄鐢ㄦ潵鍋氭伓鎰忕殑鏀誨嚮銆?br style="line-height: normal;">
鍥犱負鍒嗙墖鐨勫寘鐨勭涓涓? 鍒嗙墖鍖呭惈涓涓寘澶達紝鑻ユ病鏈夊寘鍒嗙墖鐨勯噸緇勫姛鑳斤紝鍖呰繃婊ゅ櫒涓嶅彲鑳芥嫻嬮檮鍔犵殑鍖呭垎鐗囥傚吀鍨嬬殑鏀誨嚮Typical attacks involve in overlapping the packet data in which packet header is 鍏稿瀷鐨勬敾鍑籘ypical attacks involve in overlapping the packet data in which packet header isnormal until is it overwritten with different destination IP (or port) thereby bypassing firewall rules銆傚寘鍒嗙墖鑳戒綔涓?DOS 鏀誨嚮鐨勪竴閮ㄥ垎錛屽畠鍙互crash older IP stacks 鎴栨定姝籆PU榪炴帴鑳藉姏銆?br style="line-height: normal;">
Netfilter/Iptables涓殑榪炴帴璺熻釜浠g爜鑳借嚜鍔ㄥ仛鍒嗙墖閲嶇粍銆傚畠浠嶆湁寮辯偣錛屽彲鑳? 鍙楀埌楗卞拰榪炴帴鏀誨嚮錛屽彲浠ユ妸CPU璧勬簮鑰楀厜銆?br style="line-height: normal;">
Handshake phase:
No.   Direction   seq     ack
1     A->B        10000   0
2     B->A        20000   10000+1=10001
3     A->B        10001   20000+1=20001
Explanation:
1: A initiates the connection, initializing its seq with a random number, assumed here to be 10000; the ACK at this point is 0.

2: After receiving A's connection request, B likewise initializes its seq with a random number, here assumed to be 20000, meaning: I have received your request, and my data stream starts from this number. B's ACK is A's seq plus 1, i.e. 10000+1=10001.

3: After A receives B's reply, its seq is the seq of its previous request plus 1, i.e. 10000+1=10001, likewise meaning: I have received your reply, and my data stream starts from this number. A's ACK at this point is B's seq plus 1, i.e. 20000+1=20001.


Data transfer phase:
No.   Direction   seq     ack                   size
23    A->B        40000   70000                 1514
24    B->A        70000   40000+1514-54=41460   54
25    A->B        41460   70000+54-54=70000     1514
26    B->A        70000   41460+1514-54=42920   54
Explanation:
23: B receives the packet A sent with seq=40000, ack=70000, size=1514.

24: B then sends A a packet in turn, telling A: I received your previous packet. B fills its own seq with the ACK of the packet it received, and its ACK with the SEQ of the received packet plus the size of its payload (excluding the Ethernet, IP and TCP headers), to confirm that all the data A sent has been received.

25: When A receives the packet from B with ack=41460, it sees that 41460 is exactly its previous packet's seq plus that packet's payload size, so it knows its previous packet arrived safely. It then sends B another packet. This new packet's seq is again filled with the ACK it received, and its ACK with the received packet's seq (70000) plus that packet's size (54), i.e. ack=70000+54-54 (all headers, no payload).
In fact, during the handshake and teardown the acknowledgement number should be the peer's sequence number plus 1, while when transferring data it is the peer's sequence number plus the length of the application-layer data the peer's segment carries. If you compute the added length from the Ethernet frame, you have gone astray.
Also, if the peer sends no data over, your own acknowledgement number does not change, and your sequence number is the previous sequence number plus the length of the application-layer data you sent last time.

]]>
NAT鐨勭己闄?/title><link>http://m.shnenglu.com/beautykingdom/archive/2010/07/13/120225.html</link><dc:creator>chatler</dc:creator><author>chatler</author><pubDate>Tue, 13 Jul 2010 07:28:00 GMT</pubDate><guid>http://m.shnenglu.com/beautykingdom/archive/2010/07/13/120225.html</guid><wfw:comment>http://m.shnenglu.com/beautykingdom/comments/120225.html</wfw:comment><comments>http://m.shnenglu.com/beautykingdom/archive/2010/07/13/120225.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://m.shnenglu.com/beautykingdom/comments/commentRss/120225.html</wfw:commentRss><trackback:ping>http://m.shnenglu.com/beautykingdom/services/trackbacks/120225.html</trackback:ping><description><![CDATA[NAT鐨勪紭鐐逛笉蹇呭璁?瀹冩彁渚涗簡涓緋誨垪鐩稿叧鎶鏈潵瀹炵幇澶氫釜鍐呯綉鐢ㄦ埛閫氳繃涓涓叕緗慽p鍜屽閮ㄩ氫俊,鏈夋晥鐨勮В鍐充簡ipv4鍦板潃涓嶅鐢ㄧ殑闂.閭d箞浣嶄簬NAT鍚? 鐨勭敤鎴蜂嬌鐢ㄧ緗慽p鐪熺殑鍜屼嬌鐢ㄥ叕緗慽p涓鏍峰悧?NAT瑙e喅浜嗘墍鏈夊湴鍧杞崲鐨勭浉鍏抽棶棰樹簡鍚?<br>涓嬮潰涓昏璁蹭竴浜汵AT涓嶆敮鎸佺殑鏂歸潰,浠ュ強鎵璋撶殑NAT 鐨?緙洪櫡".<br><br>涓浜涘簲鐢ㄥ眰鍗忚(濡俆CP鍜孲IP),鍦ㄥ畠浠殑搴旂敤灞傛暟鎹腑闇瑕佸寘鍚叕緗慖P鍦板潃.鎷縁TP鏉ヨ鍚?浼楁墍鍛ㄧ煡,FTP鏄氳繃 涓や釜涓嶅悓鐨勮繛鎺ユ潵浼犺緭鎺у埗鎶ユ枃鍜屾暟鎹姤鏂囩殑.褰撲紶杈撲竴涓枃浠舵椂,FTP鏈嶅姟鍣ㄨ姹傞氳繃鎺у埗鎶ユ枃寰楀埌鍗沖皢浼犺緭鐨勬暟鎹姤鏂囩殑緗戠粶灞傚拰浼犺緭灞傚湴鍧 (IP/PORT).濡傛灉榪欎釜鏃跺欏鎴蜂富鏈烘槸鍦∟AT涔嬪悗鐨?閭d箞鏈嶅姟鍣ㄧ鏀跺埌鐨刬p/port灝嗕細鏄疦AT杞寲鍓嶇殑縐佺綉IP鍦板潃,浠庤屼細瀵艱嚧鏂囦歡浼犺緭澶? 鏁?<br>SIP(Session Initiation Protocol)涓昏鏄潵鎺у埗闊抽浼犺緭鐨?榪欎釜鍗忚涔熼潰涓村悓鏍風(fēng)殑闂.鍥犱負SIP寤虹珛榪炴帴鏃?闇瑕佺敤鍒板嚑涓笉鍚岀殑绔彛鏉ラ氳繃RTP浼犺緭闊抽嫻?鑰屼笖榪欎簺 绔彛浠ュ強IP浼氳緙栫爜鍒伴煶棰戞祦涓?浼犺緭緇欐湇鍔″櫒绔?浠庤屽疄鐜板悗緇殑閫氫俊.<br>濡傛灉娌℃湁涓浜涚壒孌婄殑鎶鏈?濡係TUN),閭d箞NAT鏄笉鏀寔榪欎簺鍗忚鐨? 榪欎簺鍗忚緇忚繃NAT涔熻偗瀹氫細澶辮觸.<br><span style="color: #000166;">Some Application Layer protocols (such as FTP and SIP) send explicit network addresses within their application data. FTP in active mode, for example, uses separate connections for control traffic (commands) and for data traffic (file contents). When requesting a file transfer, the host making the request identifies the corresponding data connection by its network layer and transport layer addresses. 
If the host making the request lies behind a simple NAT firewall, the translation of the IP address and/or TCP port number makes the information received by the server invalid. The Session Initiation Protocol (SIP) controls Voice over IP (VoIP) communications and suffers the same problem. SIP may use multiple ports to set up a connection and transmit voice stream via RTP. IP addresses and port numbers are encoded in the payload data and must be known prior to the traversal of NATs. Without special techniques, such as STUN, NAT behavior is unpredictable and communications may fail.</span><br><br>涓? 闈㈣涓浜涚壒孌婄殑鎶鏈?鏉ヤ嬌NAT鏀寔榪欎簺鐗規(guī)畩鐨勫簲鐢ㄥ眰鍗忚.<br><br>鏈鐩磋鐨勬兂娉曞氨鏄?鏃㈢劧NAT淇敼浜咺P/PROT,閭d箞鎴戜滑涔熶慨鏀瑰簲鐢ㄥ眰鏁? 鎹腑鐩稿簲鐨処P/PORT.搴旂敤灞傜綉鍏?ALG)(紜歡鎴栬蔣浠墮兘琛?灝辨槸榪欐牱鏉ヨВ鍐寵繖涓棶棰樼殑.搴旂敤灞傜綉鍏寵繍琛屽湪璁劇疆浜哊AT鐨勯槻鐏璁懼涓?瀹冧細鏇存柊浼? 杈撴暟鎹腑鐨処P/PORT.鎵浠?搴旂敤灞傜綉鍏充篃蹇呴』鑳藉瑙f瀽搴旂敤灞傚崗璁?鑰屼笖瀵逛簬姣忎竴縐嶅崗璁?鍙兘闇瑕佷笉鍚岀殑搴旂敤灞傜綉鍏蟲潵鍋?<br><span style="color: #000166;">Application Layer Gateway (ALG) software or hardware may correct these problems. An ALG software module running on a NAT firewall device updates any payload data made invalid by address translation. ALGs obviously need to understand the higher-layer protocol that they need to fix, and so each protocol with this problem requires a separate ALG.</span><br><br>鍙﹀涓涓В鍐蟲闂鐨勫姙娉曞氨鏄疦AT絀塊?姝ゆ柟娉曚富瑕佸埄鐢⊿TUN鎴? ICE絳夊崗璁垨鑰呬竴浜涘拰浼氳瘽鎺у埗鐩稿叧鐨勭壒鏈夌殑鏂規(guī)硶鏉ュ疄鐜?鐞嗚涓奛AT絀塊忔渶濂借兘澶熷悓鏃墮傜敤浜庡熀浜嶵CP鍜屽熀浜嶶DP鐨勫簲鐢?浣嗘槸鍩轟簬UDP鐨勫簲鐢ㄧ浉瀵規(guī)瘮 杈冪畝鍗?鏇村箍涓烘祦浼?涔熸洿閫傚悎鍏煎涓浜涚綾葷殑NAT鍋氱┛閫?榪欐牱,搴旂敤灞傚崗璁湪璁捐鐨勬椂鍊?蹇呴』鑰冭檻鍒板彲鏀寔NAT絀塊?浣嗕竴浜涘叾浠栫被鍨嬬殑NAT(姣斿瀵? 縐癗AT)鏄棤璁哄浣曚篃涓嶈兘鍋氱┛閫忕殑.<br><span style="color: #000166;">Another possible solution to this problem is to use NAT traversal techniques using protocols such as STUN or ICE or proprietary approaches in a session border controller. NAT traversal is possible in both TCP- and UDP-based applications, but the UDP-based technique is simpler, more widely understood, and more compatible with legacy NATs. 
In either case, the high level protocol must be designed with NAT traversal in mind, and it does not work reliably across symmetric NATs or other poorly-behaved legacy NATs.</span><br><br><br>榪樻湁涓浜涙柟娉?姣斿UPnP (Universal Plug and Play) 鎴?Bonjour (NAT-PMP),浣嗘槸榪欎簺鏂規(guī)硶閮介渶瑕佷笓闂ㄧ殑NAT璁懼.<br><span style="color: #000166;">Other possibilities are UPnP (Universal Plug and Play) or Bonjour (NAT-PMP), but these require the cooperation of the NAT device.</span><br><br><br>澶ч儴鍒嗕紶緇熺殑瀹㈡埛-鏈嶅姟鍣ㄥ崗璁?闄や簡FTP),閮戒笉瀹氫箟3灞備互涓婄殑鏁版嵁鏍? 寮?鎵浠?涔熷氨鍙互鍜屼紶緇熺殑NAT鍏煎.瀹為檯涓?鍦ㄨ璁″簲鐢ㄥ眰鍗忚鐨勬椂鍊欏簲灝介噺閬垮厤娑夊強鍒?灞備互涓婄殑鏁版嵁,鍥犱負榪欐牱浼氫嬌瀹冨吋瀹筃AT鏃跺鏉傚寲.<br><span style="color: #000166;">Most traditional client-server protocols (FTP being the main exception), however, do not send layer 3 contact information and therefore do not require any special treatment by NATs. In fact, avoiding NAT complications is practically a requirement when designing new higher-layer protocols today.</span><br style="color: #000166;"><br><br>NAT涔熶細鍜屽埄鐢╥psec鍔犲瘑鐨勪竴浜涘簲鐢ㄥ啿紿?姣斿SIP鐢?shù)璇?濡傛灉鏈夊緢澶歋IP鐢?shù)璇濊畱证囧? NA(P)T涔嬪悗,閭d箞鍦ㄧ數(shù)璇濆埄鐢╥psc鍔犲瘑瀹冧滑鐨勪俊鍙鋒椂,濡傛灉涔熷姞瀵嗕簡port淇℃伅,閭d箞榪欏氨鎰忓懗鐫NAPT灝變笉鑳借漿鎹ort,鍙兘杞崲IP.浣嗘槸 榪欐牱灝變細瀵艱嚧鍥炴潵鐨勬暟鎹寘閮借NAT鍒板悓涓涓鎴風(fēng),浠庤屽鑷撮氫俊澶辮觸(涓嶅お鏄庣櫧).涓嶈繃,榪欎釜闂鏈夊緢澶氭柟娉曟潵瑙e喅,姣斿鐢═LS.TLS鏄繍琛屽湪絎洓 灞?OSI妯″瀷)鐨?鎵浠ュ畠涓嶅寘鍚玴ort淇℃伅.涔熷彲浠ュ湪UDP涔嬪唴鏉ュ皝瑁卛psec,TISPAN 灝辨槸鐢ㄨ繖縐嶆柟娉曟潵瀹炵幇瀹夊叏NAT杞寲鐨?<br><span style="color: #000166;">NATs can also cause problems where IPsec encryption is applied and in cases where multiple devices such as SIP phones are located behind a NAT. Phones which encrypt their signaling with IPsec encapsulate the port information within the IPsec packet meaning that NA(P)T devices cannot access and translate the port. In these cases the NA(P)T devices revert to simple NAT operation. This means that all traffic returning to the NAT will be mapped onto one client causing the service to fail. 
There are a couple of solutions to this problem, one is to use TLS which operates at level 4 in the OSI Reference Model and therefore does not mask the port number, or to Encapsulate the IPsec within UDP - the latter being the solution chosen by TISPAN to achieve secure NAT traversal.</span><br><br><br>Dan Kaminsky 鍦?008騫寸殑鏃跺欐彁鍑篘APT榪樹細闂存帴鐨勫獎鍝岲NS鍗忚鐨勫仴澹?涓轟簡閬垮厤DNS鏈嶅姟鍣ㄧ紦瀛樹腑姣?鍦∟A(p)T闃茬伀澧欎箣鍚庣殑DNS鏈嶅姟鍣ㄦ渶濂戒笉瑕佽漿鎹? 鏉ヨ嚜澶栭儴鐨凞NS璇鋒眰(UDP)鐨勬簮绔彛.鑰屽DNS緙撳瓨涓瘨鏀誨嚮鐨勫簲瀵規(guī)帾鏂藉氨鏄嬌鎵鏈夌殑DNS鏈嶅姟鍣ㄧ敤闅忔満鐨勭鍙f潵鎺ユ敹DNS璇鋒眰.浣嗗鏋淣A(P)T 浣緿NS璇鋒眰鐨勬簮绔彛涔熼殢鏈哄寲,閭d箞鍦∟A(P)T闃茬伀澧欏悗闈㈢殑DNS鏈嶅姟鍣ㄨ繕鏄細宕╂簝鐨?<br><span style="color: #000166;">The DNS protocol vulnerability announced by Dan Kaminsky on 2008 July 8 is indirectly affected by NAT port mapping. To avoid DNS server cache poisoning, it is highly desirable to not translate UDP source port numbers of outgoing DNS requests from any DNS server which is behind a firewall which implements NAT. The recommended work-around for the DNS vulnerability is to make all caching DNS servers use randomized UDP source ports. If the NAT function de-randomizes the UDP source ports, the DNS server will be made vulnerable.</span><br><br>浣? 浜嶯AT鍚庣殑涓繪満涓嶈兘瀹炵幇鐪熺殑绔绔殑閫氫俊,涔熶笉鑳戒嬌鐢ㄤ竴浜涘拰NAT鍐茬獊鐨刬nternat鍗忚.鑰屼笖浠庡閮ㄥ彂璧風(fēng)殑TCP榪炴帴鍜屼竴浜涙棤鐘舵佺殑鍗忚(鍒╃敤 udp鐨勪笂灞傚崗璁?涔熶笉鑳芥甯哥殑榪涜,闄ら潪NAT鎵鍦ㄨ澶囬氳繃鐩稿叧鎶鏈敮鎸佽繖浜涘崗璁?涓浜涘崗璁兘澶熷埄鐢ㄥ簲鐢ㄥ眰緗戝叧鎴栧叾浠栨妧鏈?鏉ヤ嬌鍙湁涓绔浜嶯AT鍚庣殑 閫氫俊鍙屾柟姝e父閫氫俊.浣嗚鏄弻鏂歸兘鍦∟AT鍚庡氨浼氬け璐?NAT涔熷拰涓浜涢毀閬撳崗璁?濡俰psec)鍐茬獊,鍥犱負NAT浼氫慨鏀筰p鎴杙ort,浠庤屼細浣垮崗璁殑瀹屾暣 鎬ф牎楠屽け璐?<br><span style="color: #000166;">Hosts behind NAT-enabled routers do not have end-to-end connectivity and cannot participate in some Internet protocols. Services that require the initiation of TCP connections from the outside network, or stateless protocols such as those using UDP, can be disrupted. Unless the NAT router makes a specific effort to support such protocols, incoming packets cannot reach their destination. 
Some protocols can accommodate one instance of NAT between participating hosts ("passive mode" FTP, for example), sometimes with the assistance of an application-level gateway (see below), but fail when both systems are separated from the Internet by NAT. Use of NAT also complicates tunneling protocols such as IPsec because NAT modifies values in the headers which interfere with the integrity checks done by IPsec and other tunneling protocols.</span><br><br><br>绔绔殑榪炴帴鏄? internet璁捐鏃剁殑涓涓噸瑕佺殑鏍稿績鐨勫熀鏈師鍒?鑰孨AT鏄繚鑳岃繖涓鍘熷垯鐨?浣嗘槸NAT鍦ㄨ璁$殑鏃跺欎篃鍏呭垎鍦拌冭檻鍒頒簡榪欎簺闂.鐜板湪鍩轟簬ipv6鐨? NAT宸茬粡琚箍娉涘叧娉?浣嗚澶歩pv6鏋舵瀯璁捐鑰呰涓篿pv6搴旇鎽掑純NAT.<br><span style="color: #000166;">End-to-end connectivity has been a core principle of the Internet, supported for example by the Internet Architecture Board. Current Internet architectural documents observe that NAT is a violation of the End-to-End Principle, but that NAT does have a valid role in careful design. There is considerably more concern with the use of IPv6 NAT, and many IPv6 architects believe IPv6 was intended to remove the need for NAT.</span><br><br><br>鐢變簬NAT鐨勮繛鎺ヨ拷韙叿鏈夌煭鏃舵晥鎬?鎵浠ュ湪鐗瑰畾鐨勫湴鍧杞崲鍏崇郴浼氬湪涓灝忔鏃墮棿鍚庡け鏁? 闄ら潪閬靛畧NAT鐨刱eep-alive鏈哄埗,鍐呯綉涓繪満涓嶆椂鐨勫幓璁塊棶澶栭儴涓繪満.榪欒嚦灝戜細閫犳垚涓浜涗笉蹇呰鐨勬秷鑰?姣斿娑堣楁墜鎸佽澶囩殑鐢?shù)閲?<br><span style="color: #000166;">Because of the short-lived nature of the stateful translation tables in NAT routers, devices on the internal network lose IP connectivity typically within a very short period of time unless they implement NAT keep-alive mechanisms by frequently accessing outside hosts. This dramatically shortens the power reserves on battery-operated hand-held devices and has thwarted more widespread deployment of such IP-native Internet-enabled devices.</span><br style="color: #000166;"><br><br>涓浜汭PS浼氱洿鎺ユ彁渚涚粰鐢ㄦ埛縐佺綉IP鍦板潃,榪欐牱鐢ㄦ埛灝卞繀欏婚氳繃IPS鐨? 
NAT鏉ュ拰澶栭儴INTERNET閫氫俊.榪欐牱,鐢ㄦ埛瀹為檯涓婃病鏈夊疄鐜扮瀵圭閫氫俊,涓棿鍔犱簡涓涓狪PS鐨凬AT,榪欐湁鎮(zhèn)栦簬Internet Architecture Board鍒楀嚭鐨刬nternal鏍稿績鍩烘湰鍘熷垯.<br><span style="color: #000166;">Some Internet service providers (ISPs) provide their customers only with "local" IP addresses.[citation needed]Thus, these customers must access services external to the ISP's network through NAT. As a result, the customers cannot achieve true end-to-end connectivity, in violation of the core principles of the Internet as laid out by the Internet Architecture Board.</span><br style="color: #000166;"><br>NAT 鏈鍚庣殑涓涓己闄峰氨鏄?NAT鐨勬帹騫垮拰浣跨敤,瑙e喅浜唅pv4涓婭P鍦板潃涓嶅鐢ㄧ殑闂,澶уぇ鐨勬帹榪熶簡IPV6鐨勫彂灞?<br>(璇村畠鏄紭鐐瑰ソ鍛?榪樻槸緙洪櫡濂? 鍛?)<br><span style="color: #000166;">it is possible that its [NAT] widespread use will significantly delay the need to deploy IPv6</span><br><br>Reference:<br><a target="_blank">Network address translation</a><br><br>from:<br>http://blog.chinaunix.net/u2/86590/showart.php?id=2208148<br><img src ="http://m.shnenglu.com/beautykingdom/aggbug/120225.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://m.shnenglu.com/beautykingdom/" target="_blank">chatler</a> 2010-07-13 15:28 <a href="http://m.shnenglu.com/beautykingdom/archive/2010/07/13/120225.html#Feedback" target="_blank" style="text-decoration:none;">鍙戣〃璇勮</a></div>]]></description></item><item><title>Linux涓嬮潰socket緙栫▼鐨勯潪闃誨TCP 鐮旂┒http://m.shnenglu.com/beautykingdom/archive/2010/07/07/119615.htmlchatlerchatlerWed, 07 Jul 2010 09:14:00 GMThttp://m.shnenglu.com/beautykingdom/archive/2010/07/07/119615.htmlhttp://m.shnenglu.com/beautykingdom/comments/119615.htmlhttp://m.shnenglu.com/beautykingdom/archive/2010/07/07/119615.html#Feedback0http://m.shnenglu.com/beautykingdom/comments/commentRss/119615.htmlhttp://m.shnenglu.com/beautykingdom/services/trackbacks/119615.html

That the TCP protocol itself is reliable does not mean an application sending data over TCP is automatically reliable. Whether blocking or not, the size send reports does not tell you how much data the peer has received with recv.

In blocking mode, send copies the data the application asked to send into the send buffer, sends it, and returns only after it is acknowledged. Because of the send buffer, the visible behavior is: if the send buffer is larger than the requested size, send returns immediately while the data goes out on the network; otherwise, send transmits the part of the data the send buffer cannot hold and waits for the peer's acknowledgement before returning (the receiving end acknowledges as soon as the data reaches its receive buffer — it does not have to wait for the application to call recv).

In non-blocking mode, the send call merely copies the data into the protocol stack's buffer. If the available buffer space is not enough, it copies as much as it can and returns the size successfully copied; if the buffer has no space available at all, it returns -1 and sets errno to EAGAIN.
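A non-blocking sender therefore has to cope with both short writes and EAGAIN. A minimal sketch (the helper name send_all is ours; a real program would wait for writability with select/epoll instead of spinning):

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Queue the whole buffer on a non-blocking socket, retrying on short
 * writes and on EAGAIN/EWOULDBLOCK; returns len on success, -1 on error. */
ssize_t send_all(int fd, const char *buf, size_t len) {
    size_t done = 0;
    while (done < len) {
        ssize_t n = send(fd, buf + done, len - done, 0);
        if (n > 0) { done += (size_t)n; continue; }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            continue;   /* buffer full: a real program waits for POLLOUT here */
        return -1;      /* genuine error, e.g. ECONNRESET or EPIPE */
    }
    return (ssize_t)done;
}
```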


On Linux, sysctl -a | grep net.ipv4.tcp_wmem shows the system's default send-buffer sizes:

net.ipv4.tcp_wmem = 4096 16384 81920
There are three values: the first is the minimum number of bytes allocated to a socket's send buffer; the second is the default value (overridden by net.core.wmem_default), up to which the buffer can grow when the system is not heavily loaded; the third is the maximum number of bytes of send-buffer space (overridden by net.core.wmem_max).
Based on actual tests, if you change net.ipv4.tcp_wmem by hand, the changed values are used; otherwise, by default, the protocol stack usually allocates memory according to the net.core.wmem_default and net.core.wmem_max values.

An application should change the send-buffer size in the program according to the application's characteristics:

socklen_t sendbuflen = 0;
socklen_t len = sizeof(sendbuflen);
getsockopt(clientSocket, SOL_SOCKET, SO_SNDBUF, (void*)&sendbuflen, &len);
printf("default,sendbuf:%d\n", sendbuflen);

sendbuflen = 10240;
setsockopt(clientSocket, SOL_SOCKET, SO_SNDBUF, (void*)&sendbuflen, len);
getsockopt(clientSocket, SOL_SOCKET, SO_SNDBUF, (void*)&sendbuflen, &len);
printf("now,sendbuf:%d\n", sendbuflen);


闇瑕佹敞鎰忕殑鏄?铏界劧灝嗗彂閫佺紦瀛樿緗? 鎴愪簡10k,浣嗗疄闄呬笂,鍗忚鏍堜細灝嗗叾鎵╁ぇ1鍊?璁句負20k.


------------------- Case analysis ---------------------


In real applications, if the sender uses non-blocking sends, then because of network congestion or a slow receiver it commonly happens that the sending application appears to have sent 10k of data while only 2k has actually been delivered to the peer's buffer, with 8k still in the local buffer (unsent, or sent but not yet acknowledged by the receiver). In that case the receiving application can read 2k of data. Suppose the receiving application has called recv and taken 1k of data to process. At this instant, if one of the following happens, the two sides behave as follows:

A. 鍙戦佸簲鐢ㄧ▼搴忚涓簊end瀹屼簡10k鏁版嵁,鍏抽棴浜唖ocket:
鍙? 閫佷富鏈轟綔涓簍cp鐨勪富鍔ㄥ叧闂?榪炴帴灝嗗浜嶧IN_WAIT1鐨勫崐鍏抽棴鐘舵?絳夊緟瀵規(guī)柟鐨刟ck),騫朵笖,鍙戦佺紦瀛樹腑鐨?k鏁版嵁騫朵笉娓呴櫎,渚濈劧浼氬彂閫佺粰瀵? 绔?濡傛灉鎺ユ敹搴旂敤紼嬪簭渚濈劧鍦╮ecv,閭d箞瀹冧細鏀跺埌浣欎笅鐨?k鏁版嵁(榪欎釜鍓嶉鏄?鎺ユ敹绔細鍦ㄥ彂閫佺FIN_WAIT1鐘舵佽秴鏃跺墠鏀跺埌浣欎笅鐨?k鏁版嵁.), 鐒跺悗寰楀埌涓涓绔痵ocket琚叧闂殑娑堟伅(recv榪斿洖0).榪欐椂,搴旇榪涜鍏抽棴.

B. 鍙戦佸簲鐢ㄧ▼搴忓啀嬈¤皟鐢╯end鍙戦?k鐨勬暟鎹?
鍋? 濡傚彂閫佺紦瀛樼殑絀洪棿涓?0k,閭d箞鍙戦佺紦瀛樺彲鐢ㄧ┖闂翠負20-8=12k,澶т簬璇鋒眰鍙戦佺殑8k,鎵浠end鍑芥暟灝嗘暟鎹仛鎷瘋礉鍚?騫剁珛鍗寵繑鍥?192;

鍋? 濡傚彂 閫佺紦瀛樼殑絀洪棿涓?2k,閭d箞姝ゆ椂鍙戦佺紦瀛樺彲鐢ㄧ┖闂磋繕鏈?2-8=4k,send()浼氳繑鍥?096,搴旂敤紼嬪簭鍙戠幇榪斿洖鐨勫煎皬浜庤姹傚彂閫佺殑澶у皬鍊煎悗,鍙互璁? 涓虹紦瀛樺尯宸叉弧,榪欐椂蹇呴』闃誨(鎴栭氳繃select絳夊緟涓嬩竴嬈ocket鍙啓鐨勪俊鍙?,濡傛灉搴旂敤紼嬪簭涓嶇悊浼?绔嬪嵆鍐嶆璋冪敤send,閭d箞浼氬緱鍒?1鐨勫? 鍦╨inux涓嬭〃鐜頒負errno=EAGAIN.


C. The receiving application closes the socket after processing the 1k of data:
As the active closer, the receiving host's connection enters FIN_WAIT1 (waiting for the peer's ACK). The sending application then gets a socket-readable signal (usually select returning the socket as readable), but on reading it finds recv returning 0; at that point it should call close to shut the socket down (sending the ACK to the peer).

濡? 鏋滃彂閫佸簲鐢ㄧ▼搴忔病鏈夊鐞嗚繖涓彲璇葷殑淇″彿,鑰屾槸鍦╯end,閭d箞榪欒鍒嗕袱縐嶆儏鍐墊潵鑰冭檻,鍋囧鏄湪鍙戦佺鏀跺埌RST鏍囧織涔嬪悗璋冪敤send,send灝嗚繑鍥? -1,鍚屾椂errno璁句負ECONNRESET琛ㄧず瀵圭緗戠粶宸叉柇寮,
浣嗘槸,涔熸湁璇存硶鏄繘紼嬩細鏀跺埌SIGPIPE淇″彿, 璇ヤ俊鍙風(fēng)殑榛樿鍝嶅簲鍔ㄤ綔鏄鍑鴻繘紼?濡傛灉蹇界暐璇ヤ俊鍙?閭d箞send鏄繑鍥?1,errno涓篍PIPE(鏈瘉瀹?;濡傛灉鏄湪鍙戦佺鏀跺埌RST鏍囧織涔嬪墠,鍒檚end鍍忓線甯鎬竴鏍峰伐浣?

All of the above is about non-blocking send. If send is a blocking call and happens to be blocked (for instance sending one huge buf that exceeds the send buffer) when the peer's socket closes, then send returns the number of bytes successfully sent; if send is called again after that, the behavior is the same as above.

D. The network at a switch or router goes down:
After processing the 1k of data it already received, the receiving application keeps reading the remaining 1k from its buffer, and then sees no more readable data. This situation requires the application to handle timeouts; the usual approach is to set a maximum wait time for select, and if no data is readable beyond that time, the socket is considered unusable.

The sending application keeps pushing the remaining data into the network but never gets an acknowledgement, so the buffer's free space stays at 0. This situation too must be handled by the application.

If you prefer not to handle these timeout situations in the application, TCP itself can handle them; see the following sysctl options:
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_keepalive_probes
net.ipv4.tcp_keepalive_time

 Original article: http://xufish.blogbus.com/logs/40537344.html

from:
http://blog.chinaunix.net/u2/67780/showart_2056353.html


]]>
Implementing the HTTP Protocol in C
http://m.shnenglu.com/beautykingdom/archive/2010/06/27/118839.html
chatler, Sun, 27 Jun 2010

Everyone is familiar with applications of the HTTP protocol, browsing plenty of things on the web every day, and everyone knows HTTP is a fairly simple protocol. Still, whenever I used a downloader such as Thunder to fetch web pages, the "download all links with Thunder" feature always felt magical. Later I realized that implementing such download features is not hard at all: send a request according to the HTTP protocol, then parse the data you receive; if the page contains link markers such as href, you can go one level deeper and download those too. The most widely used HTTP version today is 1.1; to understand it thoroughly, consult RFC 2616. I have read the RFC myself — go read it if you want ^_^
The source code follows:
/******* http client program: httpclient.c ************/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <limits.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <ctype.h>

//////////////////////////////httpclient.c 寮濮?//////////////////////////////////////////


/********************************************
Purpose: find the first character matching x, searching from the right end of the string
********************************************/
char * Rstrchr(char * s, char x) {
  int i = strlen(s);
  if(!(*s)) return 0;
  while(s[i-1]) if(strchr(s + (i - 1), x)) return (s + (i - 1)); else i--;
  return 0;
}

/********************************************
Purpose: convert a string to all lowercase
********************************************/
void ToLowerCase(char * s) {
  while(s && *s) {*s=tolower(*s);s++;}
}

/**************************************************************
Purpose: parse the web host and port out of the string src, and get the file the user wants to download
***************************************************************/
void GetHost(char * src, char * web, char * file, int * port) {
  char * pA;
  char * pB;
  web[0] = 0;   /* web and file are pointers here, so memset(..., sizeof(web)) would */
  file[0] = 0;  /* clear only the size of a pointer; terminating the strings is enough */
  *port = 0;
  if(!(*src)) return;
  pA = src;
  if(!strncmp(pA, "http://", strlen("http://"))) pA = src+strlen("http://");
  else if(!strncmp(pA, "https://", strlen("https://"))) pA = src+strlen("https://");
  pB = strchr(pA, '/');
  if(pB) {
    memcpy(web, pA, strlen(pA) - strlen(pB));
    if(pB+1) {
      memcpy(file, pB + 1, strlen(pB) - 1);
      file[strlen(pB) - 1] = 0;
    }
  }
  else memcpy(web, pA, strlen(pA));
  if(pB) web[strlen(pA) - strlen(pB)] = 0;
  else web[strlen(pA)] = 0;
  pA = strchr(web, ':');
  if(pA) *port = atoi(pA + 1);
  else *port = 80;
}


int main(int argc, char *argv[])
{
  int sockfd;
  char buffer[1024];
  struct sockaddr_in server_addr;
  struct hostent *host;
  int portnumber,nbytes;
  char host_addr[256];
  char host_file[1024];
  char local_file[256];
  FILE * fp;
  char request[1024];
  int send, totalsend;
  int i;
  char * pt;

  if(argc!=2)
  {
    fprintf(stderr,"Usage:%s web-address\a\n",argv[0]);
    exit(1);
  }
  printf("parameter.1 is: %s\n", argv[1]);
  ToLowerCase(argv[1]); /* convert the argument to all lowercase */
  printf("lowercase parameter.1 is: %s\n", argv[1]);

  GetHost(argv[1], host_addr, host_file, &portnumber); /* parse the address, port and file name */
  printf("webhost:%s\n", host_addr);
  printf("hostfile:%s\n", host_file);
  printf("portnumber:%d\n\n", portnumber);

  if((host=gethostbyname(host_addr))==NULL) /* resolve the host's IP address */
  {
    fprintf(stderr,"Gethostname error, %s\n", strerror(errno));
    exit(1);
  }

  /* the client creates the sockfd descriptor */
  if((sockfd=socket(AF_INET,SOCK_STREAM,0))==-1) /* create the SOCKET */
  {
    fprintf(stderr,"Socket Error:%s\a\n",strerror(errno));
    exit(1);
  }

  /* the client fills in the server's address data */
  bzero(&server_addr,sizeof(server_addr));
  server_addr.sin_family=AF_INET;
  server_addr.sin_port=htons(portnumber);
  server_addr.sin_addr=*((struct in_addr *)host->h_addr);

  /* the client initiates the connection request */
  if(connect(sockfd,(struct sockaddr *)(&server_addr),sizeof(struct sockaddr))==-1) /* connect to the site */
  {
    fprintf(stderr,"Connect Error:%s\a\n",strerror(errno));
    exit(1);
  }

  sprintf(request, "GET /%s HTTP/1.1\r\nAccept: */*\r\nAccept-Language: zh-cn\r\n\
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)\r\n\
Host: %s:%d\r\nConnection: Close\r\n\r\n", host_file, host_addr, portnumber);
  printf("%s", request); /* the request is ready; it will be sent to the host */

  /* derive the real local file name */
  if(host_file && *host_file) pt = Rstrchr(host_file, '/');
  else pt = 0;

  memset(local_file, 0, sizeof(local_file));
  if(pt && *pt) {
    if((pt + 1) && *(pt+1)) strcpy(local_file, pt + 1);
    else memcpy(local_file, host_file, strlen(host_file) - 1);
  }
  else if(host_file && *host_file) strcpy(local_file, host_file);
  else strcpy(local_file, "index.html");
  printf("local filename to write:%s\n\n", local_file);

  /* send the http request */
  send = 0;totalsend = 0;
  nbytes=strlen(request);
  while(totalsend < nbytes) {
    send = write(sockfd, request + totalsend, nbytes - totalsend);
    if(send==-1) {printf("send error!%s\n", strerror(errno));exit(0);}
    totalsend+=send;
    printf("%d bytes send OK!\n", totalsend);
  }

  fp = fopen(local_file, "a");
  if(!fp) {
    printf("create file error! %s\n", strerror(errno));
    return 0;
  }
  printf("\nThe following is the response header:\n");
  i=0;
  /* connected; receive the http response */
  while((nbytes=read(sockfd,buffer,1))==1)
  {
    if(i < 4) {
      if(buffer[0] == '\r' || buffer[0] == '\n') i++;
      else i = 0;
      printf("%c", buffer[0]); /* print the http header on the screen */
    }
    else {
      fwrite(buffer, 1, 1, fp); /* write the http body into the file */
      i++;
      if(i%1024 == 0) fflush(fp); /* flush to disk every 1K */
    }
  }
  fclose(fp);
  /* end the session */
  close(sockfd);
  exit(0);
}


zj@zj:~/C_pram/practice/http_client$ ls
httpclient  httpclient.c
zj@zj:~/C_pram/practice/http_client$ ./httpclient http://www.baidu.com/
parameter.1 is: http://www.baidu.com/
lowercase parameter.1 is: http://www.baidu.com/
webhost:www.baidu.com
hostfile:
portnumber:80

GET / HTTP/1.1
Accept: */*
Accept-Language: zh-cn
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)
Host: www.baidu.com:80
Connection: Close

local filename to write:index.html

163 bytes send OK!

The following is the response header:
HTTP/1.1 200 OK
Date: Wed, 29 Oct 2008 10:41:40 GMT
Server: BWS/1.0
Content-Length: 4216
Content-Type: text/html
Cache-Control: private
Expires: Wed, 29 Oct 2008 10:41:40 GMT
Set-Cookie: BAIDUID=A93059C8DDF7F1BC47C10CAF9779030E:FG=1; expires=Wed, 29-Oct-38 10:41:40 GMT; path=/; domain=.baidu.com
P3P: CP=" OTI DSP COR IVA OUR IND COM "

zj@zj:~/C_pram/practice/http_client$ ls
httpclient  httpclient.c  index.html

涓嶆寚瀹氭枃浠跺悕瀛楃殑璇?榛樿灝辨槸涓嬭澆緗戠珯榛樿鐨勯欏典簡^_^.

from:
http://blog.chinaunix.net/u2/76292/showart_1353805.html



]]>
c璇█鎶撳彇緗戦〉鏁版嵁http://m.shnenglu.com/beautykingdom/archive/2010/06/27/118838.htmlchatlerchatlerSun, 27 Jun 2010 15:13:00 GMThttp://m.shnenglu.com/beautykingdom/archive/2010/06/27/118838.htmlhttp://m.shnenglu.com/beautykingdom/comments/118838.htmlhttp://m.shnenglu.com/beautykingdom/archive/2010/06/27/118838.html#Feedback0http://m.shnenglu.com/beautykingdom/comments/commentRss/118838.htmlhttp://m.shnenglu.com/beautykingdom/services/trackbacks/118838.html#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>

#define HTTPPORT 80


char* head =
     "GET /u2/76292/ HTTP/1.1\r\n"
     "Accept: */*\r\n"
     "Accept-Language: zh-cn\r\n"
     "Accept-Encoding: gzip, deflate\r\n"
     "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; CIBA; TheWorld)\r\n"
     "Host:blog.chinaunix.net\r\n"
     "Connection: Keep-Alive\r\n\r\n";

int connect_URL(char *domain,int port)
{
    int sock;
    struct hostent * host;
    struct sockaddr_in server;
    host = gethostbyname(domain);
    if (host == NULL)
     {
      printf("gethostbyname error\n");
      return -2;
     }
   // printf("HostName: %s\n",host->h_name);

   // printf("IP Address: %s\n",inet_ntoa(*((struct in_addr *)host->h_addr)));

    sock = socket(AF_INET,SOCK_STREAM,0);
    if (sock < 0)
    {
      printf("invalid socket\n");
      return -1;
    }
    memset(&server,0,sizeof(struct sockaddr_in));
    memcpy(&server.sin_addr,host->h_addr_list[0],host->h_length);
    server.sin_family = AF_INET;
    server.sin_port = htons(port);
    return (connect(sock,(struct sockaddr *)&server,sizeof(struct sockaddr)) <0) ? -1 : sock;
}


int main()
{
  int sock;
  char buf[100];
  FILE *fp;                     /* fp was used below without being declared */
  char *domain = "blog.chinaunix.net";

  
  fp = fopen("test.txt","wb"); /* the original opened with "rb", which cannot be written to */
  if(NULL == fp){
    printf("can't open output file!\n");
    return -1;
  }
  

    sock = connect_URL(domain,HTTPPORT);
    if (sock <0){
       printf("connetc err\n");
       return -1;
        }

    send(sock,head,strlen(head),0);

    while(1)
    {
      int n = recv(sock, buf, sizeof(buf)-1, 0);
      if(n < 1)
        break;
      buf[n] = '\0';          /* recv data is not NUL-terminated */
      fprintf(fp,"%s",buf);   /* save the http data (the original wrote the undefined `bufp`) */
    }
    
    fclose(fp);
    close(sock);
  
  printf("bye!\n");
  return 0;
}

 

Here the data is saved to the local disk; you can modify the code on this basis. You can work out the head definition yourself by capturing packets with wireshark.

from:
http://blog.chinaunix.net/u2/76292/showart.php?id=2123108



]]>
TCP鐨勬祦閲忔帶鍒?http://m.shnenglu.com/beautykingdom/archive/2010/01/08/105213.htmlchatlerchatlerFri, 08 Jan 2010 15:34:00 GMThttp://m.shnenglu.com/beautykingdom/archive/2010/01/08/105213.htmlhttp://m.shnenglu.com/beautykingdom/comments/105213.htmlhttp://m.shnenglu.com/beautykingdom/archive/2010/01/08/105213.html#Feedback0http://m.shnenglu.com/beautykingdom/comments/commentRss/105213.htmlhttp://m.shnenglu.com/beautykingdom/services/trackbacks/105213.html1. 鍓嶈█
 
TCP鏄叿澶囨祦鎺у拰鍙潬榪炴帴鑳藉姏鐨勫崗璁紝涓洪槻姝CP鍙戠敓鎷ュ鎴栦負鎻愰珮浼犺緭鏁堢巼錛屽湪緗?br>緇滃彂灞曟棭鏈熷氨鎻愬嚭浜嗕竴浜涚浉鍏崇殑TCP嫻佹帶鍜屼紭鍖栫畻娉曪紝鑰屼笖涔熻RFC2581瑙勫畾鏄瘡涓?br>TCP瀹炵幇鏃惰瀹炵幇鐨勩?/div>
 
鏈枃涓紝涓烘眰鏂逛究鎶婂皢“TCP鍒嗙粍孌?segment)”閮界洿鎺ョО涓?#8220;鍖?#8221;銆?/div>
 
2. Slow Start and Congestion Avoidance
 
Slow start and congestion avoidance MUST be implemented by the TCP sender, to keep it from pushing large bursts of data into the network and congesting it.

鍏堜粙緇嶅嚑涓浉鍏沖弬鏁幫紝鏄湪閫氫俊鍙屾柟涓渶瑕佽冭檻浣嗕笉鍦═CP鍖呬腑浣撶幇鐨勪竴浜涘弬鏁幫細

鎷ュ紿楀彛(congestion window錛宑wnd)錛屾槸鎸囧彂閫佹柟鍦ㄦ帴鏀跺埌瀵規(guī)柟鐨凙CK紜鍓嶅悜鍏佽緗戠粶鍙戦佺殑鏁版嵁閲忥紝鏁版嵁鍙戦佸悗錛屾嫢濉炵獥鍙g緝?yōu)畯锛涙帴鏀跺埌瀵规柟鐨凙CK鍚庯紝鎷ュ紿楀彛鐩稿簲澧炲姞錛屾嫢濉炵獥鍙h秺澶э紝鍙彂閫佺殑鏁版嵁閲忚秺澶с?/strong>鎷ュ紿楀彛鍒濆鍊肩殑RFC2581涓瑙勫畾涓轟笉瓚呰繃鍙戦佹柟MSS鐨勪袱鍊嶏紝鑰屼笖涓嶈兘瓚呰繃涓や釜TCP鍖咃紝鍦≧FC3390涓洿鏂頒簡鍒濆紿楀彛澶у皬鐨勮緗柟娉曘?/div>

The advertised window (rwnd) is the amount of not-yet-acknowledged future data the receiver can accept. After the receiver takes data in, the advertised window shrinks; after it sends the ACK, the advertised window widens correspondingly.

The slow-start threshold (slow start threshold, ssthresh) is a parameter used to decide whether the slow-start or the congestion-avoidance algorithm should be controlling the flow; it also keeps changing over the course of the communication.

When cwnd < ssthresh, the congestion-window value is still fairly small, meaning the amount of unacknowledged data may grow, and the slow-start algorithm must run; when cwnd > ssthresh, the amount of data that may be sent is large, and the congestion-avoidance algorithm must run.

鎷ュ紿楀彛cwnd鏄牴鎹彂閫佺殑鏁版嵁閲忚嚜鍔ㄥ噺灝忕殑錛屼絾鎵╁ぇ灝遍渶瑕佹牴鎹鏂圭殑鎺ユ敹鎯呭喌榪涜鎵╁ぇ錛屾參鍚姩鍜屾嫢濉為伩鍏嶇畻娉曢兘鏄弿榪板浣曟墿澶ц鍊肩殑銆?/strong>

Under slow start, each ACK received from the peer lets the TCP sender grow the congestion window by at most one sender-MSS worth of bytes; the algorithm stops once the congestion window exceeds ssthresh or congestion is observed.

鍚姩鎷ュ閬垮厤綆楁硶鏃訛紝鎷ュ紿楀彛鍦ㄤ竴涓繛鎺ュ線榪旀椂闂碦TT鍐呭鍔犱竴涓渶澶CP鍖呴暱搴︾殑閲忥紝涓鑸疄鐜版椂鐢ㄤ互涓嬪叕寮忚綆楋細
      cwnd += max(SMSS*SMSS/cwnd, 1)            錛?.1)
SMSS涓哄彂閫佹柟MSS銆?/div>

When the TCP sender detects packet loss, it must adjust ssthresh, generally computed as:
      ssthresh = max (FlightSize / 2, 2*SMSS)    (2.2)
where FlightSize is the amount of data that has been sent but not yet acknowledged.
 
3. Fast Retransmit and Fast Recovery

When the TCP receiver gets an out-of-order TCP packet it must reply with a duplicate ACK, hinting to the sender that the network may have dropped a packet. After receiving three consecutive duplicate ACKs, the sender starts the fast retransmit algorithm: based on the acknowledgment number, it immediately retransmits the packet that was probably lost, without waiting for the retransmission timer to expire; an ordinary retransmission happens only after the retransmission timer expires with no ACK received. This algorithm is one the TCP sender SHOULD implement, not a MUST. After fast retransmitting, the sender enters the fast recovery phase, which lasts until duplicate ACKs stop arriving.

蹇熼噸浼犲拰蹇熸仮澶嶅叿浣撹繃紼嬩負錛?br>1. 褰撴敹鍒扮3涓噸澶嶇殑ACK鍖呮椂錛宻sthreh鍊兼寜鍏紡2.2閲嶆柊璁劇疆錛?/div>
2. 閲嶄紶涓㈠け鐨勫寘鍚庯紝灝嗘嫢濉炵獥鍙wnd璁劇疆涓簊shresh+3*SMSS錛屼漢宸ユ墿澶т簡鎷ュ紿楀彛錛?/div>
3. 瀵逛簬姣忎釜鎺ユ敹鍒扮殑閲嶅鐨凙CK鍖咃紝cwnd鐩稿簲澧炲姞SMSS錛屾墿澶ф嫢濉炵獥鍙o紱
4. 濡傛灉鏂扮殑鎷ュ紿楀彛cwnd鍊煎拰鎺ユ敹鏂圭殑閫氬憡紿楀彛鍊煎厑璁哥殑璇濓紝鍙互緇х畫鍙戞柊鍖咃紱
5. 褰撴敹鍒頒笅涓涓狝CK紜浜嗘柊鏁版嵁鏃訛紝灝哻wnd澶у皬璋冩暣涓簊shresh錛屽噺灝戠獥鍙o紱瀵規(guī)帴鏀舵柟
   鏉ヨ錛屾帴鏀跺埌閲嶅彂鐨凾CP鍖呭悗灝辮鍙戞ACK紜褰撳墠鎺ユ敹鐨勬暟鎹?/div>
 
4. Conclusion
These algorithms aim at keeping the network reliable and available, preventing the congestion collapse that network overload would cause; they are comparatively conservative.

5. Appendix: a discussion

A: These algorithms are all a matter between the two communicating endpoints. From the perspective of developing middleboxes such as firewalls, does a middlebox need to care about them?
Duanmu: Hmm... I can't see the necessity either, since the algorithms' parameters all live inside the two endpoints and never appear in the TCP packets. But they should make life easier for middleboxes. It's like driving on a road: these algorithms are the traffic rules that make you drive in an orderly way, and the traffic police only care how you drive, not what car you drive; if everyone drives well, the police have an easy job too. A good car makes it easy to drive well, but a bad car can be driven well too.

A: The prototypes of these algorithms were proposed long ago, the earliest around 1988. Networks were primitive then; a 9600 bps modem was impressive and computers were slow, so implementing the algorithms was of some use. But now, nearly 20 years later, 100 Mbit is almost obsolete, gigabit and 10-gigabit networks are spreading, and even a PC has gigabytes of RAM. Is there any point in being so disciplined about a few KB of data? It's like fighter aircraft: now that jets are in their 4th generation, is there any point in still studying propeller fighters?
Duanmu: Well... it's like a virus database: doesn't it contain countless DOS-era viruses you will probably never see again in your life? Yet no antivirus vendor will remove them from the database; the database only grows, never shrinks. The same goes for these algorithms: precisely because they are unused day to day and nobody pays attention, knowing them gives you something to show off, and it is especially effective for impressing newbies!

A: You really have too much time on your hands!
Duanmu: You got it! If I weren't bored, why would I write a blog?

Duanmu: Doing technology is sometimes a sad affair: you must drag along a lot of old baggage, i.e. backward compatibility, which at some point becomes the biggest obstacle to further progress. Here is a joke from smth that is not really a joke:

    The gauge of a modern railway is 4 feet 8.5 inches. The rails adopted the standard tram wheel gauge, and the tram gauge in turn followed the wheel gauge of horse-drawn carriages.
    Why is a carriage's wheel gauge 4 feet 8.5 inches? Because the wheel ruts on English roads were 4 feet 8.5 inches apart; a carriage built to any other gauge would quickly break its wheels on the old English roads.
    And where did the rut width of English roads come from? That goes back to ancient Rome. The old roads of all of Europe (including Britain) were laid by the Romans for their armies, and 4 feet 8.5 inches was exactly the width of a Roman war chariot.
    And how was the width of the Roman chariot determined? The answer is simple: it is the combined width of the hindquarters of the two horses pulling it.
    The story does not end there. Even the boosters of the American space shuttle could not escape the horses' hindquarters: once built, the boosters had to be shipped by rail, railways inevitably pass through tunnels, and tunnel width follows track gauge. So the width of the rocket boosters, the pinnacle of high technology, was determined by the combined width of two horses' rear ends.
Reposted from:
http://m.shnenglu.com/prayer/archive/2009/04/20/80527.html


chatler 2010-01-08 23:34 posted a comment
]]>Flow Control and Congestion Control</title><link>http://m.shnenglu.com/beautykingdom/archive/2009/12/30/104460.html</link><dc:creator>chatler</dc:creator><author>chatler</author><pubDate>Wed, 30 Dec 2009 08:54:00 GMT</pubDate><guid>http://m.shnenglu.com/beautykingdom/archive/2009/12/30/104460.html</guid><wfw:comment>http://m.shnenglu.com/beautykingdom/comments/104460.html</wfw:comment><comments>http://m.shnenglu.com/beautykingdom/archive/2009/12/30/104460.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://m.shnenglu.com/beautykingdom/comments/commentRss/104460.html</wfw:commentRss><trackback:ping>http://m.shnenglu.com/beautykingdom/services/trackbacks/104460.html</trackback:ping><description><![CDATA[Congestion is the degradation of transmission performance in a packet-switched network that occurs when too many packets are in transit for the limited resources of the store-and-forward nodes. An extreme case of congestion is deadlock, and recovering from deadlock usually requires a network reset.<br>Flow control means controlling, on one channel, the amount and rate of data the sender transmits so that it does not exceed what the receiver can bear; that capacity chiefly means the receiver's receiving rate and the size of its receive buffer. Stop-and-wait or sliding-window schemes are normally used for flow control.<br>Flow control exists because resources at the end systems are limited; congestion control exists because resources at intermediate nodes are limited.<br><img src ="http://m.shnenglu.com/beautykingdom/aggbug/104460.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://m.shnenglu.com/beautykingdom/" target="_blank">chatler</a> 2009-12-30 16:54 <a href="http://m.shnenglu.com/beautykingdom/archive/2009/12/30/104460.html#Feedback" target="_blank" style="text-decoration:none;">post a comment</a></div>]]></description></item><item><title>Downloading a file, a directory, or an entire site with wget</title>http://m.shnenglu.com/beautykingdom/archive/2009/12/22/103663.htmlchatlerchatlerMon, 21 Dec 2009 17:04:00 
GMThttp://m.shnenglu.com/beautykingdom/archive/2009/12/22/103663.htmlhttp://m.shnenglu.com/beautykingdom/comments/103663.htmlhttp://m.shnenglu.com/beautykingdom/archive/2009/12/22/103663.html#Feedback0http://m.shnenglu.com/beautykingdom/comments/commentRss/103663.htmlhttp://m.shnenglu.com/beautykingdom/services/trackbacks/103663.htmlI haven't gone through the man page for the specific options yet; I'll add them here once I've read it.<br>


chatler 2009-12-22 01:04 posted a comment
]]>
The detailed process of an HTTP request --- understanding computer networks &lt;repost&gt;http://m.shnenglu.com/beautykingdom/archive/2009/10/21/99142.htmlchatlerchatlerWed, 21 Oct 2009 15:05:00 GMThttp://m.shnenglu.com/beautykingdom/archive/2009/10/21/99142.htmlhttp://m.shnenglu.com/beautykingdom/comments/99142.htmlhttp://m.shnenglu.com/beautykingdom/archive/2009/10/21/99142.html#Feedback0http://m.shnenglu.com/beautykingdom/comments/commentRss/99142.htmlhttp://m.shnenglu.com/beautykingdom/services/trackbacks/99142.html

The detailed process of an HTTP request

Let's look at everything that happens behind the scenes when we type http://www.mycompany.com:8080/mydir/index.html into a browser.

First of all, HTTP is an application-layer protocol. A protocol at this layer is just a communication convention: because the two sides want to communicate, they must agree on a convention in advance.

1. Connect. When we issue such a request, a socket connection must first be established. Since a socket is built from an IP address and a port, a DNS resolution step comes first, turning www.mycompany.com into an IP address; if the URL contains no port number, the protocol's default port is used.

The DNS process works like this: when configuring the network on our local machine we fill in a DNS server, so the host sends the name to that configured DNS server. If the server can resolve the name it returns its IP; otherwise it forwards the resolution request to its upstream DNS server. The whole DNS system can be seen as a tree, and the request travels toward the root until a result is obtained. Now we have the target IP and port, so we can open the socket connection.
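The resolution step can be sketched with the standard getaddrinfo() call; resolve_ipv4() is a hypothetical helper, and a numeric address is used here so the example stays deterministic (substituting a real hostname would trigger an actual DNS lookup):

```c
/* Sketch of the name-resolution step using the standard getaddrinfo() call.
 * resolve_ipv4() is a hypothetical helper for illustration. */
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Resolve host:port to the first IPv4 address, as dotted-quad text.
 * Returns 0 on success, non-zero on failure. */
int resolve_ipv4(const char *host, const char *port, char *out, size_t len) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;        /* IPv4 only, for simplicity */
    hints.ai_socktype = SOCK_STREAM;  /* we intend to open a TCP socket */
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    struct sockaddr_in *sin = (struct sockaddr_in *)res->ai_addr;
    inet_ntop(AF_INET, &sin->sin_addr, out, len);
    freeaddrinfo(res);
    return 0;
}

int main(void) {
    char ip[INET_ADDRSTRLEN];
    /* A numeric address resolves without contacting any DNS server;
     * "www.mycompany.com" here would perform a real lookup. */
    if (resolve_ipv4("127.0.0.1", "8080", ip, sizeof ip) == 0)
        printf("resolved to %s\n", ip);
    return 0;
}
```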

2. Request. Once the connection is established, we start sending the request to the web server. The request is generally a GET or POST command (POST is used to pass FORM parameters). The format of a GET command is: GET path/filename HTTP/1.0
The filename indicates the file being accessed, and HTTP/1.0 gives the HTTP version the web browser uses. Now the GET command can be sent:

GET /mydir/index.html HTTP/1.0

3. Response. The web server receives the request and processes it. It searches its document space for the file index.html in the subdirectory mydir. If the file is found, the web server sends its content back to the requesting web browser.

To inform the browser, the web server first sends some HTTP header fields and then the actual content (the HTTP body); the HTTP headers and the HTTP body are separated by one empty line.
Common HTTP headers include:
① HTTP 1.0 200 OK — the first line of the web server's response, listing the HTTP version the server runs and the response code. The code "200 OK" means the request completed.
② MIME_Version:1.0 — indicates the MIME version.
③ content_type:type — a very important header that indicates the MIME type of the HTTP body. For example, content_type:text/html indicates the data being sent is an HTML document.
④ content_length:length — the length of the HTTP body, in bytes.


4. Close the connection: once the response is finished, the web browser and web server must disconnect, to guarantee that other web browsers can establish connections with the web server.


Now let's follow in detail the journey the packets take as they roam the network.

In a layered network architecture, the layers depend on each other strictly in one direction. "Service" is the abstraction describing the relation between layers: the set of operations each layer offers to the layer immediately above it. The lower layer is the service provider, the upper layer the user requesting service. A service takes the form of primitives, such as system calls or library functions. A system call is a service primitive the operating-system kernel offers to network applications or higher-level protocols. Layer n must always offer the layer above a more complete service than layer n-1 does; otherwise layer n has no reason to exist.

The transport layer implements "end-to-end" communication, introducing the notion of inter-process communication across networks; it must also solve error control, flow control, data ordering (message sequencing), connection management, and so on, offering different service modes for these. Transport-layer services are usually provided through system calls, in the form of sockets. A client wanting a socket connection calls functions such as socket(), bind(), connect(), and can then send data with send().

Now watch a packet travel through the network:

Application layer

At the application layer, based on the current need and action and on the application-layer protocol, we decide what data to send. We put that data into a buffer, forming the application-layer message, data.

Transport layer

This data is sent through the transport layer, for example via TCP, so it is handed to the transport layer for processing. Here the message gets the transport header, chiefly containing the port numbers plus TCP's various control information; these values are directly available because the API requires the port to be specified. This forms TCP's transmission unit, the segment. TCP is an end-to-end protocol: using fields such as the sequence and acknowledgment numbers in the TCP header, the sender keeps sending and waiting for acknowledgments. After sending a segment it starts a timer, and only sends the next segment once the acknowledgment arrives; if the timer expires without an acknowledgment, it retransmits. If the receiver gets corrupted data it discards it, which causes the sender to time out and resend. Through TCP the generation of the send sequence is controlled and continuously adjusted, achieving flow control and data integrity.

Network layer

The segment to be sent is then handed to the network layer, where it is wrapped with the network-layer header containing the source and destination IP addresses; the unit of transmission at this layer is called a packet. The network layer takes responsibility for moving such packets across the network, through routers, until they reach the destination address. Here, from the destination IP address, the address of the next-hop router must be looked up. On the local host this means consulting the local routing table; running route print on Windows shows the current table, which has sections such as:
Active Routes, Default Route, Persistent Routes.

The lookup proceeds like this:
(1) From the destination address, derive the destination network number; if it is on the same local network, send directly.
(2) Otherwise, search the routing table for a route.
(3) If no explicit route is found, the routing table still holds a default gateway; IP hands the data to the next designated router at the default gateway address. The gateway may therefore itself be a router, or merely the gateway through which the local network forwards data to a particular router.
(4) When a router receives the data, it again looks up a route for the remote host or network; if none is found, the packet is sent to that router's default gateway address. The packet carries a maximum hop count; if it is exceeded, the packet is dropped, which prevents it from circulating forever. A router only inspects the network-layer wrapping of the packet, the destination IP: it works at the network layer, and the transport-layer data is opaque to it.

If none of these steps succeeds, the datagram cannot be delivered. If the undeliverable datagram originated on the local host, a "host unreachable" or "network unreachable" error is generally returned to the application that generated it.

 

浠indows涓嬩富鏈虹殑璺敱琛ㄤ負渚嬶紝鐪嬭礬鐢辯殑鏌ユ壘榪囩▼
======================================================================
Active Routes:
Network Destination        Netmask              Gateway          Interface        Metric
0.0.0.0                    0.0.0.0              192.168.1.2      192.168.1.101    10
127.0.0.0                  255.0.0.0            127.0.0.1        127.0.0.1        1
192.168.1.0                255.255.255.0        192.168.1.101    192.168.1.101    10
192.168.1.101              255.255.255.255      127.0.0.1        127.0.0.1        10
192.168.1.255              255.255.255.255      192.168.1.101    192.168.1.101    10
224.0.0.0                  240.0.0.0            192.168.1.101    192.168.1.101    10
255.255.255.255            255.255.255.255      192.168.1.101    192.168.1.101    1
Default Gateway:           192.168.1.2

Network Destination — the destination network.
Netmask — the subnet mask.
Gateway — the IP of the next-hop router's ingress interface; a router defines a link to the next router through interface and gateway, and normally interface and gateway are on the same subnet.
Interface — the egress IP of this router toward that destination (for a personal PC this is usually the machine's NIC, identified by the NIC's IP address; a PC can of course have several NICs).

The gateway concept mainly serves interaction between different subnets. When hosts A and B in two subnets want to communicate, A first sends the data to its local gateway, the gateway forwards it to the gateway of B's subnet, and that gateway delivers it to B.
The default gateway: when a packet's destination network appears nowhere in your routing records, where should your router send it? The gateway of the default route is determined by the default gateway configured on your connection, i.e. the value we normally set in the network-connection settings.

Usually interface and gateway lie within one subnet. For a router, which may have several interfaces, when a packet arrives it looks for the entry matching Network Destination; if one is found, interface names the port of this router the packet should leave through, and gateway gives the gateway address of that subnet.

絎竴鏉?nbsp;     0.0.0.0   0.0.0.0   192.168.1.2    192.168.1.101   10
0.0.0.0浠h〃浜嗙己鐪佽礬鐢便傝璺敱璁板綍鐨勬剰鎬濇槸錛氬綋鎴戞帴鏀跺埌涓涓暟鎹寘鐨勭洰鐨勭綉孌典笉鍦ㄦ垜鐨勮礬鐢辮褰曚腑錛屾垜浼氬皢璇ユ暟鎹寘閫氳繃192.168.1.101榪欎釜鎺ュ彛鍙戦佸埌192.168.1.2榪欎釜鍦板潃錛岃繖涓湴鍧鏄笅涓涓礬鐢卞櫒鐨勪竴涓帴鍙o紝榪欐牱榪欎釜鏁版嵁鍖呭氨鍙互浜や粯緇欎笅涓涓礬鐢卞櫒澶勭悊錛屼笌鎴戞棤鍏熾傝璺敱璁板綍鐨勭嚎璺川閲?10銆傚綋鏈夊涓潯鐩尮閰嶆椂錛屼細閫夋嫨鍏鋒湁杈冨皬Metric鍊肩殑閭d釜銆?/font>

絎笁鏉?nbsp;     192.168.1.0   255.255.255.0  192.168.1.101   192.168.1.101  10
鐩磋仈緗戞鐨勮礬鐢辮褰曪細褰撹礬鐢卞櫒鏀跺埌鍙戝線鐩磋仈緗戞鐨勬暟鎹寘鏃惰濡備綍澶勭悊錛岃繖縐嶆儏鍐碉紝璺敱璁板綍鐨刬nterface鍜実ateway鏄悓涓涓傚綋鎴戞帴鏀跺埌涓涓暟鎹寘鐨勭洰鐨勭綉孌墊槸192.168.1.0鏃訛紝鎴戜細灝嗚鏁版嵁鍖呴氳繃192.168.1.101榪欎釜鎺ュ彛鐩存帴鍙戦佸嚭鍘伙紝鍥犱負榪欎釜绔彛鐩存帴榪炴帴鐫192.168.1.0榪欎釜緗戞錛岃璺敱璁板綍鐨勭嚎璺川閲?10 錛堝洜interface鍜実ateway鏄悓涓涓紝琛ㄧず鏁版嵁鍖呯洿鎺ヤ紶閫佺粰鐩殑鍦板潃錛屼笉闇瑕佸啀杞粰璺敱鍣級銆?/font>

涓鑸氨鍒嗚繖涓ょ鎯呭喌錛岀洰鐨勫湴鍧涓庡綋鍓嶈礬鐢卞櫒鎺ュ彛鏄惁鍦ㄥ悓涓瀛愮綉銆傚鏋滄槸鍒欑洿鎺ュ彂閫侊紝涓嶉渶鍐嶈漿緇欒礬鐢卞櫒錛屽惁鍒欒繕闇瑕佽漿鍙戠粰涓嬩竴涓礬鐢卞櫒緇х畫榪涜澶勭悊銆?/font>

 

Having found the next-hop IP address, we still need its MAC address, which goes into the link-layer header as link-layer data. This is where the ARP protocol comes in. The procedure: consult the ARP cache (on Windows, running arp -a shows the current cache contents). If it holds the MAC address for that IP, return it directly. Otherwise issue an ARP request containing the source's IP and MAC addresses and the destination IP address, broadcast on the local network; every host checks whether its own IP matches the destination IP in the request, and the one that matches returns its MAC address while also saving the requester's IP/MAC pair. We thus obtain the MAC address for the target IP.

Link layer

Adding the MAC address and the link-layer control information to the packet forms a Frame. Under the link-layer protocol, the Frame accomplishes data transfer between adjacent nodes: connection establishment, transmission-rate control, and data integrity.

Physical layer

The physical line is only responsible for carrying the data, bit by bit, from the host to the next destination.

When the next destination receives the data, it takes it from the physical layer and unwraps it layer by layer — to the link layer, then the network layer — and then performs the processing described above; after the network, link, and physical layers wrap the data up again, it travels on toward the next address.

In the process above you can see a routing-table lookup, and building that table depends on routing algorithms. In other words, routing algorithms are only used between routers to update and maintain routing tables; the actual data-transfer process does not run the algorithm, it merely consults the table. This notion matters too, and the common routing algorithms need to be understood. The TCP protocol as a whole is fairly complex and somewhat similar to link-layer protocols; some of its mechanisms and concepts deserve careful study, such as numbering and acknowledgment, flow control, the retransmission mechanism, and the send/receive windows.

 

The basic TCP/IP model and its concepts


Physical layer

Devices: repeaters and hubs. At this layer, data received on one port is forwarded to all ports.


Link layer

Protocols: SDLC (Synchronous Data Link Control), HDLC (High-level Data Link Control), PPP. Among standalone link-layer devices the NIC is the most common; bridges are link-layer products too. Some consider certain functions of hubs and modems to belong to the link layer, though there is disagreement, with others counting them as physical-layer devices. Beyond that, all switches must work at the data-link layer, but only layer-2 switches work solely at the data-link layer. Others, such as layer-3, layer-4, and layer-7 switches, can work at OSI layers 3, 4, and 7 respectively, yet layer-2 functionality remains their basic capability.

Because of the MAC address table, collisions are largely avoided: a switch knows from the destination MAC address which port the data should be forwarded to, instead of forwarding to all ports like a hub does. Hence switches can partition collision domains.


Network layer

Four main protocols:
The Internet Protocol, IP: addresses and routes packets between hosts and networks.
The Address Resolution Protocol, ARP: obtains the hardware host address within the same physical network.
The Internet Control Message Protocol, ICMP: sends messages and reports delivery errors concerning packets.
The Internet Group Management Protocol, IGMP: used by IP hosts to report host-group membership to local multicast routers.

Devices at this layer: layer-3 switches and routers.


Transport layer

Two important protocols, TCP and UDP.

The port concept: TCP/UDP use IP addresses to identify hosts on the network and port numbers to identify application processes; that is, TCP/UDP identify an application process by the host IP address plus the port number assigned to the process. Port numbers are 16-bit unsigned integers, and TCP's port numbers and UDP's port numbers are two independent sequences. Despite this independence, if TCP and UDP both offer some well-known service, the two protocols usually choose the same port number; this is purely for convenience, not a requirement of the protocols themselves. Using port numbers, multiple processes on one host can use the transport services of TCP/UDP simultaneously, and this communication is end to end: its data is carried by IP, but is unrelated to the path the IP datagrams take. In network communication, a triple globally and uniquely identifies an application process: (protocol, local address, local port number).

In other words, TCP and UDP may use the same port number.

You can see that (protocol, source port, source IP, destination port, destination IP) fully identifies a network connection.

Application layer

Over TCP: Telnet, FTP, SMTP, DNS, HTTP.
Over UDP: RIP, NTP (Network Time Protocol), DNS (DNS also uses TCP), SNMP, TFTP.

 

References:

Understanding the local routing table: http://hi.baidu.com/thusness/blog/item/9c18e5bf33725f0818d81f52.html

Internet transport-layer protocols: http://www.cic.tsinghua.edu.cn/jdx/book6/3.htm — Computer Networks, Xie Xiren


杞嚜錛?br>http://blog.chinaunix.net/u2/67780/showart_2065190.html

chatler 2009-10-21 23:05 posted a comment
]]>TCP three-way handshake / four-way teardown in detail &lt;repost&gt;http://m.shnenglu.com/beautykingdom/archive/2009/10/20/99062.htmlchatlerchatlerTue, 20 Oct 2009 13:15:00 GMThttp://m.shnenglu.com/beautykingdom/archive/2009/10/20/99062.htmlhttp://m.shnenglu.com/beautykingdom/comments/99062.htmlhttp://m.shnenglu.com/beautykingdom/archive/2009/10/20/99062.html#Feedback0http://m.shnenglu.com/beautykingdom/comments/commentRss/99062.htmlhttp://m.shnenglu.com/beautykingdom/services/trackbacks/99062.html
1. Connection establishment protocol (three-way handshake)
(1) The client sends a TCP segment with the SYN flag to the server. This is segment 1 of the three-way handshake.
(2) The server responds to the client with segment 2 of the handshake, which carries both the ACK flag and the SYN flag: it acknowledges the client's SYN segment, and at the same time sends SYN to the client, asking whether the client is ready for data communication.
(3) The client must respond to the server once more with an ACK segment; this is segment 3.
2. Connection termination protocol (four-way teardown)
Because a TCP connection is full-duplex, each direction must be closed independently. The principle is that once one side has finished its data-sending task, it can send a FIN to terminate the connection in that direction. Receiving a FIN only means no more data will flow in that direction; a TCP connection can still send data after receiving a FIN. The side that closes first performs the active close, the other the passive close.
(1) The TCP client sends a FIN, closing client-to-server data transfer (segment 4).
(2) The server receives this FIN and sends back an ACK whose acknowledgment number is the received sequence number plus one (segment 5). Like a SYN, a FIN occupies one sequence number.
(3) The server closes the connection toward the client and sends a FIN to the client (segment 6).
(4) The client sends back an ACK segment and sets the acknowledgment number to the received sequence number plus one (segment 7).
CLOSED: nothing much to say; the initial state.
LISTEN: also very easy to understand; a server-side SOCKET is in the listening state and can accept connections.
SYN_RCVD: a SYN segment has been received. Normally this is a brief intermediate state of the server-side SOCKET during the three-way handshake of TCP connection setup; you will hardly ever see it with netstat, unless you deliberately write a test client that withholds the last ACK of the handshake. In this state, when the client's ACK segment arrives, the socket enters ESTABLISHED.
SYN_SENT: this state mirrors SYN_RCVD. When the client SOCKET executes CONNECT, it first sends a SYN segment and immediately enters SYN_SENT, waiting for the server to send segment 2 of the handshake. SYN_SENT means the client has sent a SYN segment.
ESTABLISHED: easy to understand; the connection has been established.
FIN_WAIT_1: this one deserves explanation. FIN_WAIT_1 and FIN_WAIT_2 both really mean waiting for the peer's FIN segment. The difference is: FIN_WAIT_1 is entered when a SOCKET in ESTABLISHED wants to close the connection actively and sends a FIN to the peer; when the peer responds with an ACK, the socket moves to FIN_WAIT_2. In practice the peer should, whatever the circumstances, immediately ACK the FIN, so FIN_WAIT_1 is generally hard to observe, while FIN_WAIT_2 can quite often be seen with netstat.
FIN_WAIT_2: as explained above, a SOCKET in FIN_WAIT_2 represents a half-closed connection: one side has asked to close, but has also told the peer, "I still have a bit of data to send you; I'll close the connection a little later."
TIME_WAIT: the peer's FIN segment has been received and an ACK sent; after waiting 2MSL the socket can return to the CLOSED, reusable state. If, in FIN_WAIT_1, a segment carrying both the FIN and ACK flags arrives from the peer, the socket can go straight to TIME_WAIT without passing through FIN_WAIT_2.
CLOSING: a rather special state, rarely seen in practice. Normally, after you send a FIN segment you should first receive the peer's ACK (or receive both together) before its FIN arrives. CLOSING means that after sending your FIN you did not receive the peer's ACK, but instead received the peer's FIN. When can that happen? Think about it, and the conclusion is not hard to reach: precisely when both sides close the SOCKET at almost the same time, both send FIN segments simultaneously; then CLOSING appears, meaning both sides are closing the connection.
CLOSE_WAIT: this state really means waiting to close. How so? When the peer closes a SOCKET and sends you a FIN, your system will without question respond with an ACK, and you then enter CLOSE_WAIT. What you actually need to consider next is whether you still have data to send to the peer; if not, you can close the SOCKET and send a FIN to the peer, that is, close the connection. So in CLOSE_WAIT, what remains to be done is to wait for you to close the connection.
LAST_ACK: fairly easy to understand; the passively closing side, after sending its FIN, finally waits for the peer's ACK segment. Once the ACK is received, it can enter the CLOSED, reusable state.
Finally, answers to two questions, my own conclusions after analysis (not guaranteed 100% correct):
1. Why is establishing a connection a three-way handshake, while closing it takes four?
Because when the server-side SOCKET in LISTEN receives the SYN of a connection request, it can put the ACK and SYN (the ACK answering, the SYN synchronizing) into a single segment to send. But when closing, receiving the peer's FIN merely means the peer has no more data to send you; you, however, may not yet have sent all your data to the peer, so you need not close the SOCKET at once: you may send some more data to the peer and only then send your FIN to signal that you now agree to close the connection. Hence the ACK segment and FIN segment are, in most cases, sent separately.
2. Why must TIME_WAIT wait 2MSL before returning to CLOSED?
Although both sides have agreed to close the connection, and the four teardown segments have all been coordinated and sent, so that in theory the socket could return directly to CLOSED (just as SYN_SENT goes to ESTABLISHED), we must assume the network is unreliable: you cannot guarantee your last ACK segment was received by the peer. The peer's SOCKET in LAST_ACK may, because of a timeout waiting for the ACK, retransmit its FIN segment, so the purpose of the TIME_WAIT state is to retransmit the possibly lost ACK segment.
杞嚜錛?/span>


chatler 2009-10-20 21:15 posted a comment
]]>