System I/O can be blocking, non-blocking synchronous, or non-blocking asynchronous [1, 2]. Blocking I/O means that the calling system does not return control to the caller until the operation is finished. As a result, the caller is blocked and cannot perform other activities during that time. Most importantly, the caller thread cannot be reused for other request processing while waiting for the I/O to complete, and becomes a wasted resource during that time. For example, a read() operation on a socket in blocking mode will not return control until some data becomes available, if the socket buffer is empty.

By contrast, a non-blocking synchronous call returns control to the caller immediately. The caller is not made to wait, and the invoked system immediately returns one of two responses: if the call was executed and the results are ready, the caller is told so; alternatively, the invoked system can tell the caller that it has no resources (no data in the socket) to perform the requested action. In that case, it is the responsibility of the caller to repeat the call until it succeeds. For example, a read() operation on a socket in non-blocking mode may return the number of bytes read, or a special return code of -1 with errno set to EWOULDBLOCK/EAGAIN, meaning "not ready; try again later."

In a non-blocking asynchronous call, the calling function returns control to the caller immediately, reporting that the requested action was started. The calling system will execute the caller's request using additional system resources/threads and will notify the caller (by a callback, for example) when the result is ready for processing. For example, a Windows ReadFile() or POSIX aio_read() call returns immediately and initiates an internal system read operation. Of the three approaches, this non-blocking asynchronous approach offers the best scalability and performance.

This article investigates different non-blocking I/O multiplexing mechanisms and proposes a single multi-platform design pattern/solution. We hope that this article will help developers of high-performance TCP-based servers choose an optimal design solution. We also compare the performance of Java, C# and C++ implementations of the proposed and existing solutions. We exclude the blocking approach from further discussion and comparison altogether, as it is the least effective approach for scalability and performance.

Reactor and Proactor: two I/O multiplexing approaches

In general, I/O multiplexing mechanisms rely on an event demultiplexor [1, 3], an object that dispatches I/O events from a limited number of sources to the appropriate read/write event handlers. The developer registers interest in specific events and provides event handlers, or callbacks. The event demultiplexor delivers the requested events to the event handlers. Two patterns that involve event demultiplexors are called Reactor and Proactor [1]. The Reactor pattern involves synchronous I/O, whereas the Proactor pattern involves asynchronous I/O. In Reactor, the event demultiplexor waits for events that indicate when a file descriptor or socket is ready for a read or write operation; the demultiplexor passes this event to the appropriate handler, which is responsible for performing the actual read or write. In the Proactor pattern, by contrast, the handler (or the event demultiplexor on behalf of the handler) initiates asynchronous read and write operations. The I/O operation itself is performed by the operating system (OS). The parameters passed to the OS include the addresses of user-defined data buffers, from which the OS gets the data to write, or into which the OS puts the data read. The event demultiplexor waits for events that indicate the completion of the I/O operation, and forwards those events to the appropriate handlers. For example, on Windows a handler could initiate async I/O (overlapped, in Microsoft terminology) operations, and the event demultiplexor could wait for IOCompletion events [1].
The implementation of this classic asynchronous pattern is based on an asynchronous OS-level API, and we will call this implementation the "system-level" or "true" async, because the application fully relies on the OS to execute the actual I/O.

An example will help you understand the difference between Reactor and Proactor. We will focus on the read operation here, as the write implementation is similar. Here is a read in Reactor: an event handler declares interest in read events on a socket; the event demultiplexor waits for events; when the socket becomes ready for reading, the demultiplexor notifies the handler; the handler performs the actual read and then processes the data. By comparison, here is a read operation in Proactor (true async): the handler initiates an asynchronous read operation, passing a user-defined buffer; the event demultiplexor waits for the completion event while the OS reads the data into that buffer; when the read completes, the demultiplexor notifies the handler, which only has to process the data that is already in the buffer.

Current practice

The open-source C++ development framework ACE [1, 3], developed by Douglas Schmidt et al., offers a wide range of platform-independent, low-level concurrency support classes (threading, mutexes, etc.). On the top level it provides two separate groups of classes: implementations of the ACE Reactor and the ACE Proactor. Although both of them are based on platform-independent primitives, these tools offer different interfaces. The ACE Proactor gives much better performance and robustness on MS Windows, as Windows provides a very efficient async API based on operating-system-level support [4, 5]. Unfortunately, not all operating systems provide full, robust async OS-level support; for instance, many Unix systems do not. Therefore, the ACE Reactor is the preferable solution on UNIX (currently UNIX does not have robust async facilities for sockets). As a result, to achieve the best performance on each system, developers of networked applications need to maintain two separate code bases: an ACE Proactor based solution on Windows and an ACE Reactor based solution for Unix-based systems.

Proposed solution

As we mentioned, the true async Proactor pattern requires operating-system-level support. Due to the differing nature of event handler and operating-system interaction, it is difficult to create common, unified external interfaces for both the Reactor and Proactor patterns. That, in turn, makes it hard to create a fully portable development framework that encapsulates the interface and OS-related differences. In this section, we propose a solution to the challenge of designing a portable framework for the Proactor and Reactor I/O patterns. To demonstrate this solution, we transform a Reactor demultiplexor I/O solution into an emulated async I/O solution by moving the read/write operations from the event handlers into the demultiplexor (this is the "emulated async" approach). For a read operation, the conversion looks like this: the handler registers interest and supplies its buffer up front; the demultiplexor waits for readiness, performs the read into that buffer itself, and only then dispatches to the handler, which processes the already-read data.

As we can see, by adding functionality to the demultiplexor I/O pattern, we were able to convert the Reactor pattern into a Proactor pattern. In terms of the amount of work performed, this approach is exactly the same as the Reactor pattern; we simply shifted responsibilities between different actors. There is no performance degradation, because the amount of work performed is still the same; the work was simply performed by different actors. Comparing the steps of each approach confirms that they perform an equal amount of work: the standard/classic Reactor waits for readiness, dispatches, and lets the handler read and then process, while the proposed emulated Proactor waits for readiness, reads on the handler's behalf, and then dispatches the completed data for processing.

With an operating system that does not provide an async I/O API, this approach allows us to hide the reactive nature of the available socket APIs and to expose a fully proactive async interface. This allows us to create a fully portable, platform-independent solution with a common external interface.

TProactor

The proposed solution (TProactor) was developed and implemented at Terabit P/L [6]. The solution has two alternative implementations, one in C++ and one in Java.
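To make the Reactor-versus-emulated-Proactor contrast concrete before looking at TProactor itself, here is a minimal, self-contained sketch (not the article's ACE/TProactor code; plain POSIX, a single descriptor, and invented names, purely for illustration) of the same select() loop driven in the two styles:

/* sketch: one select() demultiplexor, two dispatch styles */
#include <sys/types.h>
#include <sys/select.h>
#include <unistd.h>
#include <stddef.h>

/* Reactor style: the handler is told "fd is readable" and performs the read itself. */
typedef void (*readiness_handler)(int fd);

/* Emulated-Proactor style: the demultiplexor performs the read and hands over the result. */
typedef void (*completion_handler)(int fd, const char *buf, ssize_t nread);

static void reactor_loop(int fd, readiness_handler on_readable)
{
    fd_set rset;
    for (;;) {
        FD_ZERO(&rset);
        FD_SET(fd, &rset);
        if (select(fd + 1, &rset, NULL, NULL, NULL) <= 0)
            break;
        if (FD_ISSET(fd, &rset))
            on_readable(fd);                 /* handler calls read() and processes the data */
    }
}

static void emulated_proactor_loop(int fd, char *buf, size_t buflen,
                                   completion_handler on_read_completed)
{
    fd_set rset;
    for (;;) {
        FD_ZERO(&rset);
        FD_SET(fd, &rset);
        if (select(fd + 1, &rset, NULL, NULL, NULL) <= 0)
            break;
        if (FD_ISSET(fd, &rset)) {
            ssize_t n = read(fd, buf, buflen);   /* the demultiplexor performs the I/O ...  */
            on_read_completed(fd, buf, n);       /* ... the handler only sees completed data */
            if (n <= 0)
                break;
        }
    }
}

The amount of work is identical in both loops; the only difference is which actor issues the read(), which is exactly the shift of responsibility the emulated async approach relies on.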
The C++ version was built using ACE cross-platform low-level primitives and has a common, unified async proactive interface on all platforms. The main TProactor components are the Engine and WaitStrategy interfaces. Engine manages the async operation lifecycle; WaitStrategy manages concurrency strategies. WaitStrategy depends on Engine, and the two always work in pairs. The interfaces between Engine and WaitStrategy are strongly defined. Engines and waiting strategies are implemented as pluggable class-drivers (for the full list of all implemented Engines and corresponding WaitStrategies, see Appendix 1). TProactor is a highly configurable solution. It internally implements three engines (POSIX AIO, SUN AIO and emulated AIO) and hides six different waiting strategies, based on an asynchronous kernel API (for POSIX this is not efficient right now, due to internal POSIX AIO API problems) and on the synchronous Unix select(), poll(), /dev/poll (Solaris 5.8+), port_get (Solaris 5.10), RealTime (RT) signals (Linux 2.4+), epoll (Linux 2.6) and k-queue (FreeBSD) APIs. TProactor conforms to the standard ACE Proactor implementation interface. That makes it possible to develop a single cross-platform solution (POSIX/MS Windows) with a common (ACE Proactor) interface.

With a set of mutually interchangeable, "lego-style" Engines and WaitStrategies, a developer can choose the appropriate internal mechanism (engine and waiting strategy) at run time by setting the appropriate configuration parameters. These settings may be chosen according to specific requirements such as the number of connections, scalability, and the targeted OS. If the operating system supports an async API, a developer may use the true async approach; otherwise the user can opt for an emulated async solution built on one of the sync waiting strategies. All of those strategies are hidden behind the emulated async facade. For an HTTP server running on Sun Solaris, for example, the /dev/poll- or port_get()-based engines are the most suitable choice, able to serve a huge number of connections; but for another UNIX solution with a limited number of connections but high throughput requirements, a select()-based engine may be a better approach. Such flexibility cannot be achieved with the standard ACE Reactor/Proactor, due to inherent algorithmic problems of the different wait strategies (see Appendix 2).

Performance comparison (JAVA versus C++ versus C#)

In terms of performance, our tests show that emulating proactive behaviour on top of a reactive API does not impose any overhead: it can be faster, but not slower. According to our test results, TProactor gives, on average, up to 10-35% better performance (measured in terms of both throughput and response times) than the reactive model in the standard ACE Reactor implementation on various UNIX/Linux platforms. On Windows it gives the same performance as the standard ACE Proactor.

In addition to C++, we also implemented TProactor in Java. As of JDK version 1.4, Java provides only the sync-based approach, logically similar to C select() [7, 8]. Java TProactor is based on Java's non-blocking facilities (the java.nio packages), logically similar to C++ TProactor with a waiting strategy based on select().

Figures 1 and 2 chart the transfer rate in bits/sec versus the number of connections. These charts represent comparison results for a simple echo server built on the standard ACE Reactor on RedHat Linux 9.0, TProactor C++ and Java (IBM 1.4 JVM) on Microsoft Windows and RedHat Linux 9.0, and a C# echo server running on Windows. The performance of the native AIO APIs is represented by the "Async" curves, emulated AIO (TProactor) by the "AsyncE" curves, and TP_Reactor by the "Synch" curves. All implementations were bombarded by the same client application: a continuous stream of arbitrary fixed-size messages over N connections. The full set of tests was performed on the same hardware, and tests on different machines proved that the relative results are consistent.

A skeleton of a simple TProactor-based Java echo server is given in the user code example below; in a nutshell, the developer only has to implement two interfaces, which are described together with that example.

TProactor provides a common, flexible, and configurable solution for multi-platform, high-performance communications development. All of the problems and complexities mentioned in Appendix 2 are hidden from the developer. It is clear from the charts that C++ is still the preferable approach for high performance communication solutions, but Java on Linux comes quite close.
However, the overall Java performance was weakened by poor results on Windows. One reason for that may be that the Java 1.4 nio package is based on a select()-style API; see the note on Java NIO following the user code example. Note: all tests for Java were performed on "raw" buffers (java.nio.ByteBuffer), without data processing.

Taking into account the latest efforts to develop robust AIO on Linux [9], we can expect the Linux kernel API (the io_xxxx family of system calls) to be more scalable than the POSIX standard, although still not portable. In that case a new TProactor Engine/WaitStrategy pair based on native Linux AIO could easily be implemented to overcome the portability issue and to wrap Linux native AIO in the standard ACE Proactor interface.

[1] Douglas C. Schmidt, Stephen D. Huston, "C++ Network Programming." Addison-Wesley, 2002, ISBN 0-201-60464-7.
[2] W. Richard Stevens, "UNIX Network Programming," vols. 1 and 2. Prentice Hall, 1999, ISBN 0-13-490012-X.
[3] Douglas C. Schmidt, Michael Stal, Hans Rohnert, Frank Buschmann, "Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects, Volume 2." Wiley & Sons, NY, 2000.
[4] INFO: Socket Overlapped I/O Versus Blocking/Non-blocking Mode. Q181611, Microsoft Knowledge Base Articles.
[5] Microsoft MSDN, I/O Completion Ports.
[6] TProactor (ACE compatible Proactor).
[7] JavaDoc, java.nio.channels.
[8] JavaDoc, java.nio.channels.spi, class SelectorProvider.
[9] Linux AIO development.
See also: Ian Barile, "I/O Multiplexing & Scalable Socket Servers," DDJ, February 2004.
Further reading on event handling
The Adaptive Communication Environment
Terabit Solutions

About the authors
Alex Libman has been programming for 15 years. During the past 5 years his main area of interest has been pattern-oriented, multi-platform networked programming using C++ and Java. He is a big fan of, and a contributor to, ACE.
Vlad Gilbourd works as a computer consultant, but wishes he could spend more time listening to jazz :) As a hobby, he started and runs the www.corporatenews.com.au website.
November 25, 2005
User code example
The developer implements the two interfaces: OpRead, with a buffer into which TProactor puts its read results, and OpWrite, with a buffer from which TProactor takes the data to write. The developer also implements protocol-specific logic by providing the onReadCompleted() and onWriteCompleted() callbacks in an AsynchHandler interface implementation. Those callbacks are called asynchronously by TProactor on completion of read/write operations and are executed on a thread pool provided by TProactor (the developer does not need to write his own pool).

class EchoServerProtocol implements AsynchHandler
{
AsynchChannel achannel = null;
ByteBuffer buffer = ByteBuffer.allocate(4096); // read buffer handed to TProactor (field assumed; size illustrative)
EchoServerProtocol( Demultiplexor m, SelectableChannel channel ) throws Exception
{
this.achannel = new AsynchChannel( m, this, channel );
}
public void start() throws Exception
{
// called after construction
System.out.println( Thread.currentThread().getName() + ": EchoServer protocol started" );
achannel.read( buffer);
}
public void onReadCompleted( OpRead opRead ) throws Exception
{
if ( opRead.getError() != null )
{
// handle error, do clean-up if needed
System.out.println( "EchoServer::readCompleted: " + opRead.getError().toString());
achannel.close();
return;
}
if ( opRead.getBytesCompleted () <= 0)
{
System.out.println( "EchoServer::readCompleted: Peer closed " + opRead.getBytesCompleted();
achannel.close();
return;
}
ByteBuffer buffer = opRead.getBuffer();
achannel.write(buffer);
}
public void onWriteCompleted(OpWrite opWrite) throws Exception
{
// logically similar to onReadCompleted
...
}
}
IOHandler is a TProactor base class. AsynchHandler and Multiplexor, among other things, internally execute the wait strategy chosen by the developer.
It is true that the Java NIO package is a kind of Reactor pattern based on a select()-style API (see [7, 8]). Java NIO allows you to write your own select()-style provider (the equivalent of TProactor's waiting strategies). Looking at the Java NIO implementation for Windows (it is enough to examine the import symbols in jdk1.5.0\jre\bin\nio.dll), we can conclude that Java NIO 1.4.2 and 1.5.0 for Windows are based on the WSAEventSelect() API. That is better than select(), but slower than I/O completion ports for a significant number of connections. Should a version of Java's nio be based on I/O completion ports, that should improve performance; in that case a conversion from the Proactor pattern to the Reactor pattern would have to be made inside nio.dll. Although such a conversion is more complicated than the Reactor-to-Proactor conversion, it can be implemented within the Java NIO interfaces (this is the topic of a future article, but we can provide the algorithm). At this time, no TProactor performance tests have been done on JDK 1.5.
Appendix I
Engines and waiting strategies implemented in TProactor:

Engine (type)             Wait strategy                             Operating system
POSIX_AIO (true async)    aio_read()/aio_write() + aio_suspend()    POSIX-compliant UNIX (not robust)
                          waiting for RT signal                     POSIX (not robust)
                          callback function                         SGI IRIX, Linux (not robust)
SUN_AIO (true async)      aio_read()/aio_write() + aio_wait()       SUN (not robust)
Emulated async            non-blocking read()/write() + select()    generic POSIX
                          poll()                                    most POSIX implementations
                          /dev/poll                                 SUN
                          Linux RT signals                          Linux
                          Kqueue                                    FreeBSD
Appendix II
All sync waiting strategies can be divided into two groups. Let us describe some common logical problems for those groups:
... (select(), poll(), /dev/poll) - readiness at any time.
Resources
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/fs/i_o_completion_ports.asp
www.terabit.com.au
http://java.sun.com/j2se/1.4.2/docs/api/java/nio/channels/package-summary.html
http://java.sun.com/j2se/1.4.2/docs/api/java/nio/channels/spi/SelectorProvider.html
http://lse.sourceforge.net/io/aio.html, and
http://archive.linuxsymposium.org/ols2003/Proceedings/All-Reprints/Reprint-Pulavarty-OLS2003.pdf
http://www.cs.wustl.edu/~schmidt/ACE-papers.html
http://www.cs.wustl.edu/~schmidt/ACE.html
http://terabit.com.au/solutions.php
from:
http://www.artima.com/articles/io_design_patterns.html
The basic idea of my server framework below is:
one connection versus one thread in the worker thread pool; each worker thread runs completionWorkerRoutine.
A dedicated acceptor thread accepts sockets, associates them with the IOCP, and issues WSARecv to post a Recv completion packet to the IOCP.
completionWorkerRoutine has the following responsibilities:
1. Handle requests; when busy, grow the number of completion worker threads (never beyond maxThreads) and post the next Recv completion packet to the IOCP.
2. On wait timeout, check whether the pool is idle and how many completion worker threads exist; when idle, hold the count or shrink it down to minThreads.
3. Manage the lifecycle of every accepted socket. The system's keepalive probes are used for this; if you would rather implement a "heartbeat" at the application layer, just change QSS_SIO_KEEPALIVE_VALS_TIMEOUT back to the system default of 2 hours.
Below is a short walkthrough of IOCP alongside the source code:
socketserver.h
#ifndef __Q_SOCKET_SERVER__
#define __Q_SOCKET_SERVER__
#include <winsock2.h>
#include <mstcpip.h>
#define QSS_SIO_KEEPALIVE_VALS_TIMEOUT 30*60*1000
#define QSS_SIO_KEEPALIVE_VALS_INTERVAL 5*1000
#define MAX_THREADS 100
#define MAX_THREADS_MIN 10
#define MIN_WORKER_WAIT_TIMEOUT 20*1000
#define MAX_WORKER_WAIT_TIMEOUT 60*MIN_WORKER_WAIT_TIMEOUT
#define MAX_BUF_SIZE 1024
/* CSocketLifecycleCallback is invoked when a socket is accepted and when it is closed or hits an error */
typedef void (*CSocketLifecycleCallback)(SOCKET cs,int lifecycle);//lifecycle: 0:OnAccepted, -1:OnClose. Note: at OnClose the socket may no longer be usable; it may already have been closed abnormally or hit some other error.
/* protocol handler callback */
typedef int (*InternalProtocolHandler)(LPWSAOVERLAPPED overlapped);//return -1:SOCKET_ERROR
typedef struct Q_SOCKET_SERVER SocketServer;
DWORD initializeSocketServer(SocketServer ** ssp,WORD passive,WORD port,CSocketLifecycleCallback cslifecb,InternalProtocolHandler protoHandler,WORD minThreads,WORD maxThreads,long workerWaitTimeout);
DWORD startSocketServer(SocketServer *ss);
DWORD shutdownSocketServer(SocketServer *ss);
#endif
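To show how the API declared above is meant to be driven, here is a minimal caller sketch. This is my own illustration, not part of the original post: the port, thread counts and timeout are arbitrary, and passing NULL as the protocol handler falls back to the built-in echo handler described below.

/* sketch: driving the qss socket server declared in socketserver.h */
#include <stdio.h>
#include "socketserver.h"

static void onLifecycle(SOCKET cs, int lifecycle)
{
    /* 0: accepted, -1: closed (the socket may no longer be usable at this point) */
    printf("socket %d lifecycle event %d\n", (int)cs, lifecycle);
}

int main(void)
{
    SocketServer *ss = NULL;
    /* passive=0: run the acceptor on the calling thread; port 5555;
       NULL protocol handler selects the internal echo handler;
       4..32 worker threads, 20 s worker wait timeout */
    if (!initializeSocketServer(&ss, 0, 5555, onLifecycle, NULL, 4, 32, 20 * 1000))
        return 1;
    if (!startSocketServer(ss))     /* blocks in acceptorRoutine when passive == 0 */
        return 1;
    shutdownSocketServer(ss);
    return 0;
}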
qsocketserver.c (abbreviated qss below; the corresponding OVERLAPPED structure is abbreviated qssOl).
#include "socketserver.h"
#include "stdio.h"
typedef struct {
WORD passive;//daemon
WORD port;
WORD minThreads;
WORD maxThreads;
volatile long lifecycleStatus;//0-created,1-starting, 2-running,3-stopping,4-exitKeyPosted,5-stopped
long workerWaitTimeout;//wait timeout
CRITICAL_SECTION QSS_LOCK;
volatile long workerCounter;
volatile long currentBusyWorkers;
volatile long CSocketsCounter;//reference count of accepted sockets
CSocketLifecycleCallback cslifecb;
InternalProtocolHandler protoHandler;
WORD wsaVersion;//=MAKEWORD(2,0);
WSADATA wsData;
SOCKET server_s;
SOCKADDR_IN serv_addr;
HANDLE iocpHandle;
}QSocketServer;
typedef struct {
WSAOVERLAPPED overlapped;
SOCKET client_s;
SOCKADDR_IN client_addr;
WORD optCode;
char buf[MAX_BUF_SIZE];
WSABUF wsaBuf;
DWORD numberOfBytesTransferred;
DWORD flags;
}QSSOverlapped;
DWORD acceptorRoutine(LPVOID);
DWORD completionWorkerRoutine(LPVOID);
static void adjustQSSWorkerLimits(QSocketServer *qss){
/*adjust size and timeout.*/
/*if(qss->maxThreads <= 0) {
qss->maxThreads = MAX_THREADS;
} else if (qss->maxThreads < MAX_THREADS_MIN) {
qss->maxThreads = MAX_THREADS_MIN;
}
if(qss->minThreads > qss->maxThreads) {
qss->minThreads = qss->maxThreads;
}
if(qss->minThreads <= 0) {
if(1 == qss->maxThreads) {
qss->minThreads = 1;
} else {
qss->minThreads = qss->maxThreads/2;
}
}
if(qss->workerWaitTimeout<MIN_WORKER_WAIT_TIMEOUT)
qss->workerWaitTimeout=MIN_WORKER_WAIT_TIMEOUT;
if(qss->workerWaitTimeout>MAX_WORKER_WAIT_TIMEOUT)
qss->workerWaitTimeout=MAX_WORKER_WAIT_TIMEOUT; */
}
typedef struct{
QSocketServer * qss;
HANDLE th;
}QSSWORKER_PARAM;
static WORD addQSSWorker(QSocketServer *qss,WORD addCounter){
WORD res=0;
if(qss->workerCounter<qss->minThreads||(qss->currentBusyWorkers==qss->workerCounter&&qss->workerCounter<qss->maxThreads)){
DWORD threadId;
QSSWORKER_PARAM * pParam=NULL;
int i=0;
EnterCriticalSection(&qss->QSS_LOCK);
if(qss->workerCounter+addCounter<=qss->maxThreads)
for(;i<addCounter;i++)
{
pParam=malloc(sizeof(QSSWORKER_PARAM));
if(pParam){
pParam->th=CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)completionWorkerRoutine,pParam,CREATE_SUSPENDED,&threadId);
pParam->qss=qss;
ResumeThread(pParam->th);
qss->workerCounter++,res++;
}
}
LeaveCriticalSection(&qss->QSS_LOCK);
}
return res;
}
static void SOlogger(const char * msg,SOCKET s,int clearup){
perror(msg);
if(s>0)
closesocket(s);
if(clearup)
WSACleanup();
}
static int _InternalEchoProtocolHandler(LPWSAOVERLAPPED overlapped){
QSSOverlapped *qssOl=(QSSOverlapped *)overlapped;
printf("numOfT:%d,WSARecvd:%s,\n",qssOl->numberOfBytesTransferred,qssOl->buf);
//Sleep(500);
return send(qssOl->client_s,qssOl->buf,qssOl->numberOfBytesTransferred,0);
}
DWORD initializeSocketServer(SocketServer ** ssp,WORD passive,WORD port,CSocketLifecycleCallback cslifecb,InternalProtocolHandler protoHandler,WORD minThreads,WORD maxThreads,long workerWaitTimeout){
QSocketServer * qss=malloc(sizeof(QSocketServer));
qss->passive=passive>0?1:0;
qss->port=port;
qss->minThreads=minThreads;
qss->maxThreads=maxThreads;
qss->workerWaitTimeout=workerWaitTimeout;
qss->wsaVersion=MAKEWORD(2,0);
qss->lifecycleStatus=0;
InitializeCriticalSection(&qss->QSS_LOCK);
qss->workerCounter=0;
qss->currentBusyWorkers=0;
qss->CSocketsCounter=0;
qss->cslifecb=cslifecb,qss->protoHandler=protoHandler;
if(!qss->protoHandler)
qss->protoHandler=_InternalEchoProtocolHandler;
adjustQSSWorkerLimits(qss);
*ssp=(SocketServer *)qss;
return 1;
}
DWORD startSocketServer(SocketServer *ss){
QSocketServer * qss=(QSocketServer *)ss;
if(qss==NULL||InterlockedCompareExchange(&qss->lifecycleStatus,1,0))
return 0;
qss->serv_addr.sin_family=AF_INET;
qss->serv_addr.sin_port=htons(qss->port);
qss->serv_addr.sin_addr.s_addr=INADDR_ANY;//inet_addr("127.0.0.1");
if(WSAStartup(qss->wsaVersion,&qss->wsData)){
/*榪欓噷榪樻湁涓彃鏇插氨鏄繖涓猈SAStartup琚皟鐢ㄧ殑鏃跺?瀹冨眳鐒朵細鍚姩涓鏉¢澶栫殑綰跨▼,褰撶劧紼嶅悗榪欐潯綰跨▼浼氳嚜鍔ㄩ鍑虹殑.涓嶇煡WSAClearup鍙堜細濡備綍?......*/
SOlogger("WSAStartup failed.\n",0,0);
return 0;
}
qss->server_s=socket(AF_INET,SOCK_STREAM,IPPROTO_IP);
if(qss->server_s==INVALID_SOCKET){
SOlogger("socket failed.\n",0,1);
return 0;
}
if(bind(qss->server_s,(LPSOCKADDR)&qss->serv_addr,sizeof(SOCKADDR_IN))==SOCKET_ERROR){
SOlogger("bind failed.\n",qss->server_s,1);
return 0;
}
if(listen(qss->server_s,SOMAXCONN)==SOCKET_ERROR)/* A word about backlog: many people do not know what value to use; I have seen 1, 5, 50 and 100, and some say the bigger the value the more resources it eats. True, but setting SOMAXCONN here does not mean Windows will really use SOMAXCONN; rather, "If set to SOMAXCONN, the underlying service provider responsible for socket s will set the backlog to a maximum reasonable value." In practice different operating systems support TCP accept queues of different depths, so it is better to let the operating system decide. A server like Apache uses:
#ifndef DEFAULT_LISTENBACKLOG
#define DEFAULT_LISTENBACKLOG 511
#endif
*/
{
SOlogger("listen failed.\n",qss->server_s,1);
return 0;
}
qss->iocpHandle=CreateIoCompletionPort(INVALID_HANDLE_VALUE,NULL,0,/*NumberOfConcurrentThreads-->*/qss->maxThreads);
//initialize worker for completion routine.
addQSSWorker(qss,qss->minThreads);
qss->lifecycleStatus=2;
{
QSSWORKER_PARAM * pParam=malloc(sizeof(QSSWORKER_PARAM));
pParam->qss=qss;
pParam->th=NULL;
if(qss->passive){
DWORD threadId;
pParam->th=CreateThread(NULL,0,(LPTHREAD_START_ROUTINE)acceptorRoutine,pParam,0,&threadId);
}else
return acceptorRoutine(pParam);
}
return 1;
}
DWORD shutdownSocketServer(SocketServer *ss){
QSocketServer * qss=(QSocketServer *)ss;
if(qss==NULL||InterlockedCompareExchange(&qss->lifecycleStatus,3,2)!=2)
return 0;
closesocket(qss->server_s/*listen-socket*/);//..other accepted-sockets associated with the listen-socket will not be closed,except WSACleanup is called..
if(qss->CSocketsCounter==0)
qss->lifecycleStatus=4,PostQueuedCompletionStatus(qss->iocpHandle,0,-1,NULL);
WSACleanup();
return 1;
}
DWORD acceptorRoutine(LPVOID ss){
QSSWORKER_PARAM * pParam=(QSSWORKER_PARAM *)ss;
QSocketServer * qss=pParam->qss;
HANDLE curThread=pParam->th;
QSSOverlapped *qssOl=NULL;
SOCKADDR_IN client_addr;
int client_addr_leng=sizeof(SOCKADDR_IN);
SOCKET cs;
free(pParam);
while(1){
printf("accept starting.....\n");
cs/*Accepted-socket*/=accept(qss->server_s,(LPSOCKADDR)&client_addr,&client_addr_leng);
if(cs==INVALID_SOCKET)
{
printf("accept failed:%d\n",GetLastError());
break;
}else{//SO_KEEPALIVE,SIO_KEEPALIVE_VALS: use the system's keepalive probes as a "heartbeat". On Linux: setsockopt with SOL_TCP and TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT.
struct tcp_keepalive alive,aliveOut;
int so_keepalive_opt=1;
DWORD outDW;
if(!setsockopt(cs,SOL_SOCKET,SO_KEEPALIVE,(char *)&so_keepalive_opt,sizeof(so_keepalive_opt))){
alive.onoff=TRUE;
alive.keepalivetime=QSS_SIO_KEEPALIVE_VALS_TIMEOUT;
alive.keepaliveinterval=QSS_SIO_KEEPALIVE_VALS_INTERVAL;
if(WSAIoctl(cs,SIO_KEEPALIVE_VALS,&alive,sizeof(alive),&aliveOut,sizeof(aliveOut),&outDW,NULL,NULL)==SOCKET_ERROR){
printf("WSAIoctl SIO_KEEPALIVE_VALS failed:%d\n",GetLastError());
break;
}
}else{
printf("setsockopt SO_KEEPALIVE failed:%d\n",GetLastError());
break;
}
}
CreateIoCompletionPort((HANDLE)cs,qss->iocpHandle,cs,0);
if(qssOl==NULL){
qssOl=malloc(sizeof(QSSOverlapped));
}
qssOl->client_s=cs;
qssOl->wsaBuf.len=MAX_BUF_SIZE,qssOl->wsaBuf.buf=qssOl->buf,qssOl->numberOfBytesTransferred=0,qssOl->flags=0;//initialize WSABuf.
memset(&qssOl->overlapped,0,sizeof(WSAOVERLAPPED));
{
DWORD lastErr=GetLastError();
int ret=0;
SetLastError(0);
ret=WSARecv(cs,&qssOl->wsaBuf,1,&qssOl->numberOfBytesTransferred,&qssOl->flags,&qssOl->overlapped,NULL);
if(ret==0||(ret==SOCKET_ERROR&&GetLastError()==WSA_IO_PENDING)){
InterlockedIncrement(&qss->CSocketsCounter);//increment the accepted-socket count
if(qss->cslifecb)
qss->cslifecb(cs,0);
qssOl=NULL;
}
if(!GetLastError())
SetLastError(lastErr);
}
printf("accept flags:%d ,cs:%d.\n",GetLastError(),cs);
}//end while.
if(qssOl)
free(qssOl);
if(qss)
shutdownSocketServer((SocketServer *)qss);
if(curThread)
CloseHandle(curThread);
return 1;
}
static int postRecvCompletionPacket(QSSOverlapped * qssOl,int SOErrOccurredCode){
int SOErrOccurred=0;
DWORD lastErr=GetLastError();
SetLastError(0);
//SOCKET_ERROR:-1,WSA_IO_PENDING:997
if(WSARecv(qssOl->client_s,&qssOl->wsaBuf,1,&qssOl->numberOfBytesTransferred,&qssOl->flags,&qssOl->overlapped,NULL)==SOCKET_ERROR
&&GetLastError()!=WSA_IO_PENDING)//this case lastError maybe 64, 10054
{
SOErrOccurred=SOErrOccurredCode;
}
if(!GetLastError())
SetLastError(lastErr);
if(SOErrOccurred)
printf("worker[%d] postRecvCompletionPacket SOErrOccurred=%d,preErr:%d,postedErr:%d\n",GetCurrentThreadId(),SOErrOccurred,lastErr,GetLastError());
return SOErrOccurred;
}
DWORD completionWorkerRoutine(LPVOID ss){
QSSWORKER_PARAM * pParam=(QSSWORKER_PARAM *)ss;
QSocketServer * qss=pParam->qss;
HANDLE curThread=pParam->th;
QSSOverlapped * qssOl=NULL;
DWORD numberOfBytesTransferred=0;
ULONG_PTR completionKey=0;
int postRes=0,handleCode=0,exitCode=0,SOErrOccurred=0;
free(pParam);
while(!exitCode){
SetLastError(0);
if(GetQueuedCompletionStatus(qss->iocpHandle,&numberOfBytesTransferred,&completionKey,(LPOVERLAPPED *)&qssOl,qss->workerWaitTimeout)){
if(completionKey==-1&&qss->lifecycleStatus>=4)
{
printf("worker[%d] completionKey -1:%d \n",GetCurrentThreadId(),GetLastError());
if(qss->workerCounter>1)
PostQueuedCompletionStatus(qss->iocpHandle,0,-1,NULL);
exitCode=1;
break;
}
if(numberOfBytesTransferred>0){
InterlockedIncrement(&qss->currentBusyWorkers);
addQSSWorker(qss,1);
handleCode=qss->protoHandler((LPWSAOVERLAPPED)qssOl);
InterlockedDecrement(&qss->currentBusyWorkers);
if(handleCode>=0){
SOErrOccurred=postRecvCompletionPacket(qssOl,1);
}else
SOErrOccurred=2;
}else{
printf("worker[%d] numberOfBytesTransferred==0 ***** closesocket servS or cs *****,%d,%d ,ol is:%d\n",GetCurrentThreadId(),GetLastError(),completionKey,qssOl==NULL?0:1);
SOErrOccurred=3;
}
}else{ //GetQueuedCompletionStatus rtn FALSE, lastError 64 ,995[timeout worker thread exit.] ,WAIT_TIMEOUT:258
if(qssOl){
SOErrOccurred=postRecvCompletionPacket(qssOl,4);
}else {
printf("worker[%d] GetQueuedCompletionStatus F:%d \n",GetCurrentThreadId(),GetLastError());
if(GetLastError()!=WAIT_TIMEOUT){
exitCode=2;
}else{//wait timeout
if(qss->lifecycleStatus!=4&&qss->currentBusyWorkers==0&&qss->workerCounter>qss->minThreads){
EnterCriticalSection(&qss->QSS_LOCK);
if(qss->lifecycleStatus!=4&&qss->currentBusyWorkers==0&&qss->workerCounter>qss->minThreads){
qss->workerCounter--;//until qss->workerCounter decrease to qss->minThreads
exitCode=3;
}
LeaveCriticalSection(&qss->QSS_LOCK);
}
}
}
}//end GetQueuedCompletionStatus.
if(SOErrOccurred){
if(qss->cslifecb)
qss->cslifecb(qssOl->client_s,-1);
/*if(qssOl)*/{
closesocket(qssOl->client_s);
free(qssOl);
}
if(InterlockedDecrement(&qss->CSocketsCounter)==0&&qss->lifecycleStatus>=3){
//for qss workerSize,PostQueuedCompletionStatus -1
qss->lifecycleStatus=4,PostQueuedCompletionStatus(qss->iocpHandle,0,-1,NULL);
exitCode=4;
}
}
qssOl=NULL,numberOfBytesTransferred=0,completionKey=0,SOErrOccurred=0;//reset for the next iteration of the while loop.
}//end while.
//last to do
if(exitCode!=3){
int clearup=0;
EnterCriticalSection(&qss->QSS_LOCK);
if(!--qss->workerCounter&&qss->lifecycleStatus>=4){//clearup QSS
clearup=1;
}
LeaveCriticalSection(&qss->QSS_LOCK);
if(clearup){
DeleteCriticalSection(&qss->QSS_LOCK);
CloseHandle(qss->iocpHandle);
free(qss);
}
}
CloseHandle(curThread);
return 1;
}
------------------------------------------------------------------------------------------------------------------------
Telling apart and handling the different LastError values with IOCP is the tricky part, so pay attention to the while structure of my completionWorkerRoutine, which looks like this:
while(!exitCode){
if(completionKey==-1){...break;}
if(GetQueuedCompletionStatus){/* inside this branch, as long as the OVERLAPPED you posted was not NULL, what you get back here is exactly that OVERLAPPED. */
if(numberOfBytesTransferred>0){
/* handle the request here, and remember to keep re-posting your OVERLAPPED! */
}else{
/* here the client or the server may have called closesocket(the socket), but the OVERLAPPED is still not NULL, as long as what you posted was not NULL! */
}
}else{/* in this branch GetQueuedCompletionStatus returned FALSE, but that does not mean the OVERLAPPED is necessarily NULL. In particular, when the OVERLAPPED is not NULL, do not assume that a LastError means the current socket is useless or fatally broken; with lastError 995, for example, the socket may still be perfectly usable and you should not close it. */
if(OVERLAPPED is not NULL){
/* in this case, just go ahead and re-post regardless, and check for errors again after the post. */
}else{
}
}
if(socket error occurred){
}
prepare for the next iteration.
}
This was written in haste, so errors and omissions are inevitable; corrections and comments are very welcome, thanks!
There is still room to improve the performance of this model!
from:
http://m.shnenglu.com/adapterofcoms/archive/2010/06/26/118781.aspx
How the TCP three-way handshake works: the sender sends a packet with SYN=1, ACK=0 to the receiver to request a connection; this is the first handshake. If the receiver accepts the connection, it sends back a packet with SYN=1, ACK=1, telling the sender that it may communicate and asking it to send a confirmation packet; this is the second handshake. Finally, the sender sends a packet with SYN=0, ACK=1 to the receiver, telling it the connection has been confirmed; this is the third handshake. After that, the TCP connection is established and communication begins.
*SYN: synchronize flag. The Synchronize Sequence Numbers field is valid. This flag is only meaningful during the three-way handshake that establishes a TCP connection. It tells the server side of the connection to check the sequence number, which is the initial sequence number of the connection's initiating side (usually the client). A TCP sequence number can be viewed as a 32-bit counter ranging from 0 to 4,294,967,295. Every byte of data exchanged over a TCP connection is sequence-numbered, and the sequence number field in the TCP header holds the sequence number of the first byte in the segment.
*ACK: acknowledgement flag. The Acknowledgement Number field is valid. In most cases this flag is set. The acknowledgement number carried in the TCP header (w+1, Figure-1) is the next expected sequence number, and it also indicates that the remote system has successfully received all data up to that point.
*RST: reset flag. Used to reset the corresponding TCP connection.
*URG: urgent flag. The urgent pointer field is valid; the urgent flag is set.
*PSH: push flag.
*FIN: finish flag. A packet with this flag set is used to end a TCP session, but the corresponding port remains open, ready to receive further data.
=============================================================
When the PSH flag is set, the receiving end does not queue the data but hands it to the application as quickly as possible. For interactive connections such as telnet or rlogin, this flag is always set.
TCP itself being a reliable protocol does not mean that an application sending data with TCP is automatically reliable. Whether the socket is blocking or not, the size reported by send() does not tell you how much data the peer will actually recv().

In blocking mode, send() copies the data the application wants to send into the send buffer and returns after it has been sent and acknowledged. Because of the send buffer, the observable behaviour is: if the send buffer is larger than the data being sent, send() returns immediately while the data goes out on the network; otherwise, send() transmits the part that does not fit into the send buffer and waits for the peer's acknowledgement before returning (the receiver acknowledges as soon as the data reaches its receive buffer; it does not have to wait for the application to call recv()).

In non-blocking mode, send() merely copies the data into the protocol stack's buffer: if the free space is insufficient, it copies as much as it can and returns the number of bytes copied; if the free space is zero, it returns -1 and sets errno to EAGAIN.

On Linux you can check the system default send buffer sizes with sysctl -a | grep net.ipv4.tcp_wmem:

net.ipv4.tcp_wmem = 4096 16384 81920

There are three values: the first is the minimum number of bytes allocated for a socket's send buffer; the second is the default (overridden by net.core.wmem_default), up to which the buffer can grow when the system is not heavily loaded; the third is the maximum size of the send buffer (overridden by net.core.wmem_max). Based on actual testing, if you change net.ipv4.tcp_wmem by hand the changed values are used; otherwise the protocol stack normally allocates memory according to net.core.wmem_default and net.core.wmem_max. An application should adjust the send buffer size in the program according to the characteristics of the application:

socklen_t sendbuflen = 0;
socklen_t len = sizeof(sendbuflen);
getsockopt(clientSocket, SOL_SOCKET, SO_SNDBUF, (void*)&sendbuflen, &len);
printf("default,sendbuf:%d\n", sendbuflen);

sendbuflen = 10240;
setsockopt(clientSocket, SOL_SOCKET, SO_SNDBUF, (void*)&sendbuflen, len);
getsockopt(clientSocket, SOL_SOCKET, SO_SNDBUF, (void*)&sendbuflen, &len);
printf("now,sendbuf:%d\n", sendbuflen);
Note that although the send buffer was set to 10k here, the protocol stack actually doubles the value, so it becomes 20k.
------------------- Case analysis ---------------------
In real applications, if the sender sends in non-blocking mode, then because of network congestion or a slow receiver the typical situation is: the sending application appears to have sent 10k of data, but only 2k has actually reached the peer's buffer, and 8k is still in the local buffer (not yet sent, or not yet acknowledged by the receiver). At that moment the receiving application can read 2k of data. Suppose it has called recv() and is processing 1k of it; at this instant one of the following happens, and the two sides behave as follows:

A. The sending application believes it has sent the whole 10k and closes the socket: the sending host, as the active closer of the TCP connection, moves into the half-closed FIN_WAIT1 state (waiting for the peer's ACK); the 8k still in its send buffer is not discarded and will still be delivered to the peer. If the receiving application keeps calling recv(), it will receive the remaining 8k (provided it gets them before the sender's FIN_WAIT1 state times out) and then be told that the peer socket has been closed (recv() returns 0). At that point it should close as well.

B. The sending application calls send() again with another 8k: if the send buffer is 20k, the available space is 20-8=12k, more than the 8k requested, so send() copies the data and returns 8192 immediately. If the send buffer is 12k, only 12-8=4k is available, so send() returns 4096; seeing a return value smaller than the requested size, the application can assume the buffer is full and must now block (or use select() to wait for the next "socket writable" notification). If it ignores this and calls send() again immediately, it gets -1, which on Linux appears as errno=EAGAIN.

C. The receiving application closes the socket after processing its 1k of data: the receiving host, as the active closer, moves into the half-closed FIN_WAIT1 state (waiting for the peer's ACK). The sending application then gets a "socket readable" notification (usually select() returning the socket as readable), but on reading it finds recv() returning 0; it should then call close() to close the socket (which sends the ACK to the peer). If the sending application does not handle the readable notification and keeps calling send() instead, two cases have to be distinguished. If send() is called after the sender has received the RST flag, send() returns -1 with errno set to ECONNRESET, meaning the peer's side of the connection is gone; it is also said that the process receives a SIGPIPE signal, whose default action is to terminate the process, and that if the signal is ignored send() returns -1 with errno EPIPE (unverified). If send() is called before the RST arrives, it works as usual. The above describes non-blocking send(); if send() is a blocking call and happens to be blocked (for example, sending one huge buffer that exceeds the send buffer) when the peer closes the socket, send() returns the number of bytes successfully sent, and calling send() again behaves as above.

D. The network is cut at a switch or router: the receiving application, after processing the 1k it already has, continues to read the remaining 1k from its buffer and then simply sees no more data to read; the application has to handle this with a timeout, typically by setting a maximum select() wait time and treating the socket as dead if no data arrives within it. The sending application keeps pushing the remaining data onto the network but never gets an acknowledgement, so the free space in its send buffer stays at 0; this case also has to be handled by an application-level timeout.

If the application does not want to handle such timeouts itself, TCP can do it through its keepalive mechanism; see the sysctl settings:
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_keepalive_probes
net.ipv4.tcp_keepalive_time
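A minimal sketch of a sender loop that copes with the cases analysed above (partial sends, EAGAIN on a full buffer, a reset peer, and a dead network) might look as follows. This is an illustration under the stated assumptions, not code from the original post; the function name and the 30-second timeout are arbitrary:

/* sketch: robust non-blocking send loop */
#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

static int send_all(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    signal(SIGPIPE, SIG_IGN);                  /* get EPIPE instead of being killed (case C) */
    while (off < len) {
        ssize_t n = send(fd, buf + off, len - off, 0);
        if (n > 0) {
            off += (size_t)n;                  /* partial send: keep going (case B) */
            continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            fd_set wset;
            struct timeval tv = { 30, 0 };     /* application-level timeout (case D) */
            FD_ZERO(&wset);
            FD_SET(fd, &wset);
            if (select(fd + 1, NULL, &wset, NULL, &tv) <= 0)
                return -1;                     /* buffer stayed full: give up */
            continue;                          /* socket became writable: retry */
        }
        return -1;                             /* ECONNRESET, EPIPE, ... (case C) */
    }
    return 0;
}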
from:
http://xufish.blogbus.com/logs/40537344.html
Everyone is familiar with applications of the HTTP protocol, since we browse plenty of things on the web every day, and we all know that HTTP is a fairly simple protocol. Whenever I used a downloader such as Thunder to fetch web pages, the "download all links with Thunder" feature always seemed a bit magical. Thinking about it later, implementing such download features is really not hard: just send a request according to the HTTP protocol, then analyse the data you receive; if the page contains link markers such as href, you can go one level deeper and download those too. The most widely used version of HTTP today is 1.1; to understand it thoroughly, read RFC 2616. I have read the RFC; go and read it yourself if you want ^_^
The source code is as follows:
/******* HTTP client program httpclient.c ************/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <limits.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <ctype.h>
////////////////////////////// httpclient.c begins //////////////////////////////
/********************************************
Function: find the first matching character, searching from the right end of the string
********************************************/
char * Rstrchr(char * s, char x) {
int i = strlen(s);
if(!(*s)) return 0;
while(s[i-1]) if(strchr(s + (i - 1), x)) return (s + (i - 1)); else i--;
return 0;
}
/********************************************
Function: convert a string to all lower case
********************************************/
void ToLowerCase(char * s) {
while(s && *s) {*s=tolower(*s);s++;}
}
/**************************************************************
Function: parse the web host and port out of the string src, and get the file the user wants to download
***************************************************************/
void GetHost(char * src, char * web, char * file, int * port) {
char * pA;
char * pB;
memset(web,
0, sizeof(web));
memset(file, 0, sizeof(file));
*port = 0;
if(!(*src)) return;
pA = src;
if(!strncmp(pA, "http://", strlen("http://"))) pA = src+strlen("http://");
else if(!strncmp(pA, "https://", strlen("https://"))) pA = src+strlen("https://");
pB = strchr(pA, '/');
if(pB)
{
memcpy(web, pA, strlen(pA) - strlen(pB));
if(pB+1) {
memcpy(file, pB + 1, strlen(pB) - 1);
file[strlen(pB) - 1] = 0;
}
}
else memcpy(web, pA, strlen(pA));
if(pB)
web[strlen(pA) - strlen(pB)] = 0;
else web[strlen(pA)] = 0;
pA = strchr(web, ':');
if(pA)
*port = atoi(pA + 1);
else *port =
80;
}
int main(int
argc, char *argv[])
{
int sockfd;
char buffer[1024];
struct sockaddr_in server_addr;
struct hostent *host;
int portnumber,nbytes;
char host_addr[256];
char host_file[1024];
char local_file[256];
FILE * fp;
char request[1024];
int send,
totalsend;
int i;
char * pt;
if(argc!=2)
{
fprintf(stderr,"Usage:%s web-address\a\n",argv[0]);
exit(1);
}
printf("parameter.1
is: %s\n", argv[1]);
ToLowerCase(argv[1]);/*灝嗗弬鏁拌漿鎹負鍏ㄥ皬鍐?/
printf("lowercase
parameter.1 is: %s\n",
argv[1]);
GetHost(argv[1], host_addr, host_file, &portnumber);/*鍒嗘瀽緗戝潃銆佺鍙c佹枃浠跺悕絳?/
printf("webhost:%s\n", host_addr);
printf("hostfile:%s\n", host_file);
printf("portnumber:%d\n\n", portnumber);
if((host=gethostbyname(host_addr))==NULL)/* get the host IP address */
{
fprintf(stderr,"Gethostname error, %s\n", strerror(errno));
exit(1);
}
/* the client creates the sockfd descriptor */
if((sockfd=socket(AF_INET,SOCK_STREAM,0))==-1)/* create the socket */
{
fprintf(stderr,"Socket Error:%s\a\n",strerror(errno));
exit(1);
}
/* the client fills in the server's details */
bzero(&server_addr,sizeof(server_addr));
server_addr.sin_family=AF_INET;
server_addr.sin_port=htons(portnumber);
server_addr.sin_addr=*((struct in_addr
*)host->h_addr);
/* 瀹㈡埛紼嬪簭鍙戣搗榪炴帴璇鋒眰 */
if(connect(sockfd,(struct sockaddr *)(&server_addr),sizeof(struct sockaddr))==-1)/*榪炴帴緗戠珯*/
{
fprintf(stderr,"Connect Error:%s\a\n",strerror(errno));
exit(1);
}
sprintf(request,
"GET /%s HTTP/1.1\r\nAccept:
*/*\r\nAccept-Language: zh-cn\r\n\
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)\r\n\
Host: %s:%d\r\nConnection: Close\r\n\r\n", host_file,
host_addr, portnumber);
printf("%s", request);/*鍑嗗request錛屽皢瑕佸彂閫佺粰涓繪満*/
  /* work out the real local file name */
  if(host_file && *host_file)
    pt = Rstrchr(host_file, '/');
  else pt = 0;
  memset(local_file, 0, sizeof(local_file));
  if(pt && *pt) {
    if(*(pt + 1)) strcpy(local_file, pt + 1);
    else memcpy(local_file, host_file, strlen(host_file) - 1);
  }
  else if(host_file && *host_file) strcpy(local_file, host_file);
  else strcpy(local_file, "index.html");
  printf("local filename to write:%s\n\n", local_file);
  /* send the http request */
  send = 0; totalsend = 0;
  nbytes = strlen(request);
  while(totalsend < nbytes) {
    send = write(sockfd, request + totalsend, nbytes - totalsend);
    if(send == -1) { printf("send error!%s\n", strerror(errno)); exit(0); }
    totalsend += send;
    printf("%d bytes send OK!\n", totalsend);
  }
  fp = fopen(local_file, "a");
  if(!fp) {
    printf("create file error! %s\n", strerror(errno));
    return 0;
  }
  printf("\nThe following is the response header:\n");
  i = 0;
  /* the connection succeeded; receive the http response */
  while((nbytes = read(sockfd, buffer, 1)) == 1)
  {
    if(i < 4) {
      if(buffer[0] == '\r' || buffer[0] == '\n') i++;
      else i = 0;
      printf("%c", buffer[0]);          /* print the http header on the screen */
    }
    else {
      fwrite(buffer, 1, 1, fp);         /* write the http body into the file */
      i++;
      if(i % 1024 == 0) fflush(fp);     /* flush to disk every 1 KB */
    }
  }
  fclose(fp);
  /* end of the communication */
  close(sockfd);
  exit(0);
}
zj@zj:~/C_pram/practice/http_client$ ls
httpclient  httpclient.c
zj@zj:~/C_pram/practice/http_client$ ./httpclient http://www.baidu.com/
parameter.1 is: http://www.baidu.com/
lowercase parameter.1 is: http://www.baidu.com/
webhost:www.baidu.com
hostfile:
portnumber:80

GET / HTTP/1.1
Accept: */*
Accept-Language: zh-cn
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)
Host: www.baidu.com:80
Connection: Close

local filename to write:index.html
163 bytes send OK!

The following is the response header:
HTTP/1.1 200 OK
Date: Wed, 29 Oct 2008 10:41:40 GMT
Server: BWS/1.0
Content-Length: 4216
Content-Type: text/html
Cache-Control: private
Expires: Wed, 29 Oct 2008 10:41:40 GMT
Set-Cookie: BAIDUID=A93059C8DDF7F1BC47C10CAF9779030E:FG=1; expires=Wed, 29-Oct-38 10:41:40 GMT; path=/; domain=.baidu.com
P3P: CP=" OTI DSP COR IVA OUR IND COM "
zj@zj:~/C_pram/practice/http_client$ ls
httpclient  httpclient.c  index.html
If you don't give it a file name, it simply downloads the site's default home page ^_^.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#define HTTPPORT 80
char* head =
"GET /u2/76292/ HTTP/1.1\r\n"
"Accept: */*\r\n"
"Accept-Language: zh-cn\r\n"
"Accept-Encoding: gzip, deflate\r\n"
"User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; CIBA; TheWorld)\r\n"
"Host: blog.chinaunix.net\r\n"
"Connection: Keep-Alive\r\n\r\n";
int connect_URL(char *domain, int port)
{
    int sock;
    struct hostent * host;
    struct sockaddr_in server;
    host = gethostbyname(domain);
    if (host == NULL)
    {
        printf("gethostbyname error\n");
        return -2;
    }
    // printf("HostName: %s\n", host->h_name);
    // printf("IP Address: %s\n", inet_ntoa(*((struct in_addr *)host->h_addr)));
    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
    {
        printf("invalid socket\n");
        return -1;
    }
    memset(&server, 0, sizeof(struct sockaddr_in));
    memcpy(&server.sin_addr, host->h_addr_list[0], host->h_length);
    server.sin_family = AF_INET;
    server.sin_port = htons(port);
    if (connect(sock, (struct sockaddr *)&server, sizeof(struct sockaddr)) < 0) {
        close(sock);    /* don't leak the descriptor on a failed connect */
        return -1;
    }
    return sock;
}
int main()
{
    int sock;
    int n;
    char buf[100];
    char *domain = "blog.chinaunix.net";
    FILE *fp;
    fp = fopen("test.txt", "w");          /* local file that will hold the response */
    if(NULL == fp){
        printf("can't open output file!\n");
        return -1;
    }
    sock = connect_URL(domain, HTTPPORT);
    if (sock < 0){
        printf("connect err\n");
        fclose(fp);
        return -1;
    }
    send(sock, head, strlen(head), 0);
    while(1)
    {
        n = recv(sock, buf, sizeof(buf), 0);
        if(n < 1)
            break;
        fwrite(buf, 1, n, fp);            /* save http data */
    }
    fclose(fp);
    close(sock);
    printf("bye!\n");
    return 0;
}
Here I just save the data to the local disk; you can modify the program starting from this. The contents of the head string are something you can work out yourself by capturing packets with wireshark.
The detailed process of an HTTP request
Let's look at everything that happens after we type http://www.mycompany.com:8080/mydir/index.html into a browser.
First of all, HTTP is an application-layer protocol. At this layer the protocol is only a communication convention: because the two sides want to talk to each other, they must agree on a specification beforehand.
1. Connection. When we issue a request like this, a socket connection has to be established first. A socket is built from an IP address and a port, so before that there is a DNS resolution step that turns www.mycompany.com into an IP address; if the URL does not contain a port number, the protocol's default port is used.
The DNS process works like this: the machine was given a DNS server when the network was configured, so it sends the name to that configured DNS server; if that server can resolve the name it returns the IP, otherwise it forwards the resolution request to its upstream DNS server. The whole DNS hierarchy can be seen as a tree, and the request keeps moving towards the root until an answer is found. At this point we have the target IP and port, so the socket connection can be opened.
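In the client programs above this resolution step is done with gethostbyname(). As a minimal sketch (my own illustration, not part of the original text), the same name-plus-port lookup followed by a connect can also be written with the more modern getaddrinfo(), which returns a list of candidate addresses to try in order:
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>
/* Resolve host/port and connect to the first address that works.
 * Returns a connected socket descriptor, or -1 on failure. */
static int open_connection(const char *host, const char *port)
{
    struct addrinfo hints, *res, *p;
    int fd = -1;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;    /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;                    /* name resolution failed */
    for (p = res; p != NULL; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0) continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0) break;  /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}
/* Usage (values from the walkthrough): int fd = open_connection("www.mycompany.com", "8080"); */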
2. Request. Once the connection is established, the browser sends a request to the web server. The request is normally a GET or a POST command (POST is used to pass FORM parameters). The format of a GET command is: GET path/filename HTTP/1.0
The file name indicates which file is being accessed, and HTTP/1.0 indicates the HTTP version the browser uses. Now the GET command can be sent:
GET /mydir/index.html HTTP/1.0
3. Response. The web server receives the request and processes it. It looks in its document space for the file index.html in the subdirectory mydir. If the file is found, the web server sends its contents to the requesting browser.
To inform the browser, the web server first sends some HTTP header information and then the actual content (the HTTP body); the header and the body are separated by a blank line. Commonly used HTTP header fields include:
① HTTP 1.0 200 OK: the first line of the server's reply, giving the HTTP version the server is running and the response code. The code "200 OK" means the request completed successfully.
② MIME_Version: 1.0: the MIME version in use.
③ content_type: type: a very important header that gives the MIME type of the HTTP body; for example, content_type: text/html means the data being transferred is an HTML document.
④ content_length: length: the length of the HTTP body in bytes.
4. Closing the connection. When the response is finished, the browser and the web server disconnect, so that other browsers can establish connections with the server.
Now let's look in detail at what the data packets go through as they travel across the network.
In a layered network architecture, each layer depends strictly and only on the layer below it. A "service" is the abstract notion that describes the relationship between layers, i.e. the set of operations each layer offers to the layer immediately above it. The lower layer is the service provider, the upper layer is the user of the service. Services show up as primitives, such as system calls or library functions; a system call is a service primitive that the operating-system kernel offers to network applications or higher-level protocols. Layer n of the network must always offer layer n+1 a more complete service than layer n-1 does, otherwise layer n has no reason to exist.
The transport layer implements "end-to-end" communication and introduces the notion of communication between processes across the network; it also has to solve error control, flow control, data ordering (message sequencing), connection management and so on, and offers different service modes for these purposes. Transport-layer services are usually exposed through system calls, in the form of sockets. For a client, establishing a socket connection means calling socket(), optionally bind(), and connect(); after that, data can be sent with send().
Now let's follow a packet through the layers:
Application layer
First, at the application layer, based on what the application currently needs to do and on the application-layer protocol, we decide what data to send; we place that data in a buffer, and this forms the application layer's unit of transfer, the message (data).
Transport layer
This data is handed to the transport layer, for example to the TCP protocol. There the message gets a transport header prepended, containing mainly the port numbers plus TCP's various control fields; this information is available directly, because the port has to be specified at the interface. The result is TCP's unit of transmission, the segment. TCP is an end-to-end protocol and works with the fields in its header, such as the sequence number and the acknowledgement number: the sender keeps sending and waiting for acknowledgements, starts a timer after sending a segment, sends the next one only after the previous one has been acknowledged, and retransmits if no acknowledgement arrives before the timer expires; on the receiving side, corrupted data is discarded, which again leads to a sender timeout and retransmission. In this way the TCP protocol controls how the sending sequence is produced and keeps adjusting it, implementing flow control and data integrity.
Network layer
The segment to be sent is then handed down to the network layer, where it is wrapped with the network-layer header; the header contains the source and destination IP addresses, and the unit of transmission at this layer is called a packet. The network layer is responsible for carrying such packets across the network: how to pass through routers and finally reach the destination address. Here, given the destination IP address, we need to find the address of the next-hop route. First, on the local machine, the local routing table is consulted; on Windows you can run route print to see the current routing table, which has sections such as: Active Routes, Default Route, Persistent Routes.
The whole lookup goes like this (a sketch of the matching logic follows the Windows example below):
(1) From the destination address, derive the destination network number; if it is on the same local network, the packet can be sent directly.
(2) If not, search the routing table for a matching route.
(3) If no explicit route is found, the routing table still contains a default gateway (also called the default route); IP uses the default gateway address to hand the packet to the next designated router. So the gateway may itself be a router, or it may merely be an internal gateway that forwards data to a particular router.
(4) When a router receives the packet, it again looks up a route for the remote host or network; if it still finds no route, the packet is sent to that router's default gateway address. The packet also carries a maximum hop count; if that count is exceeded the packet is dropped, which prevents it from circulating forever. A router that receives a packet only examines the network-layer encapsulation, i.e. the destination IP; that is why we say it works at the network layer, and the transport-layer payload is transparent to it.
If none of these steps succeeds, the datagram cannot be delivered. If the undeliverable datagram originated on the local machine, a "host unreachable" or "network unreachable" error is usually returned to the application that generated it.
Taking the routing table of a Windows host as an example, let's walk through the route lookup:
======================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 192.168.1.2 192.168.1.101 10
127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1
192.168.1.0 255.255.255.0 192.168.1.101 192.168.1.101 10
192.168.1.101 255.255.255.255 127.0.0.1 127.0.0.1 10
192.168.1.255 255.255.255.255 192.168.1.101 192.168.1.101 10
224.0.0.0 240.0.0.0 192.168.1.101 192.168.1.101 10
255.255.255.255 255.255.255.255 192.168.1.101 192.168.1.101 1
Default Gateway: 192.168.1.2
Network Destination: the destination network segment.
Netmask: the subnet mask.
Gateway: the IP of the next-hop router's entry interface. The router defines the link to the next router through interface and gateway; normally interface and gateway are on the same network segment.
Interface: the outgoing IP of this router used to reach that destination (for a personal PC this is usually the machine's network card, identified by the card's IP address; of course a PC can also have several network cards).
The gateway concept is mainly about traffic between different subnets: when hosts A and B in two subnets want to communicate, A first sends the data to its local gateway, the gateway forwards it to the gateway of B's subnet, and that gateway finally delivers it to B. Default gateway: when the destination segment of a packet is not in your routing records, where should your router send that packet? The gateway of the default route is determined by the default gateway of your connection, i.e. the value we normally configure in the network connection settings.
Usually interface and gateway are in the same subnet. For a router, which may have several interfaces, when a packet arrives the router looks for an entry matching the Network Destination; if one is found, interface says which interface of the router the packet should leave from, and gateway gives the gateway address of that subnet.
First entry:  0.0.0.0 0.0.0.0 192.168.1.2 192.168.1.101 10
0.0.0.0 stands for the default route. This record means: when I receive a packet whose destination segment is not in my routing records, I send it out of the 192.168.1.101 interface to the address 192.168.1.2, which is an interface of the next router; the packet is then that router's problem, not mine. The line quality (Metric) of this record is 10. When several entries match, the one with the smaller Metric is chosen.
Third entry:  192.168.1.0 255.255.255.0 192.168.1.101 192.168.1.101 10
The route record for a directly connected segment: it tells the router what to do with packets destined for a segment it is directly attached to; in this case interface and gateway are the same. When I receive a packet whose destination segment is 192.168.1.0, I send it straight out of the 192.168.1.101 interface, because that port is directly connected to the 192.168.1.0 segment. The Metric of this record is 10 (interface and gateway being identical means the packet goes directly to the destination address and does not need to be handed to another router).
In general there are just these two cases: either the destination address is in the same subnet as the current router interface, in which case it is sent directly and does not need to be handed to a router, or it has to be forwarded to the next router for further processing.
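The lookup described above boils down to: mask the destination with each entry's netmask, compare the result with the entry's network, and among the matches prefer the most specific prefix and then the lowest metric. A small illustrative sketch (my own, using a hypothetical route_entry type rather than any real OS API):
#include <stdint.h>
#include <stddef.h>
/* Hypothetical in-memory copy of one routing-table row. */
struct route_entry {
    uint32_t dest;     /* Network Destination (host byte order) */
    uint32_t mask;     /* Netmask */
    uint32_t gateway;  /* Gateway / next hop */
    int      metric;
};
/* Return the best matching entry for dst, or NULL if none matches.
 * "Best" means longest prefix first, then lowest metric; the 0.0.0.0/0
 * default route matches everything and is only chosen as a last resort. */
static const struct route_entry *
lookup_route(const struct route_entry *table, int n, uint32_t dst)
{
    const struct route_entry *best = NULL;
    int i;
    for (i = 0; i < n; i++) {
        if ((dst & table[i].mask) != table[i].dest)
            continue;                              /* entry does not match */
        if (best == NULL ||
            table[i].mask > best->mask ||          /* more specific prefix wins */
            (table[i].mask == best->mask && table[i].metric < best->metric))
            best = &table[i];
    }
    return best;
}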
Once the next-hop IP address is known, we still need its MAC address, which goes into the link-layer header as link-layer data. This is where the ARP protocol comes in. The process is: first check the ARP cache (on Windows, arp -a shows the current cache); if it already contains the MAC address for that IP, return it directly. Otherwise an ARP request has to be sent; the request carries the source IP and MAC address and the destination IP, and is broadcast on the local network. Every host checks whether its own IP matches the destination IP in the request; the one that matches replies with its MAC address and at the same time records the requester's IP/MAC pair. In this way we obtain the MAC address of the target IP.
Link layer
The MAC address and the link-layer control information are added to the packet, forming a Frame. At the link layer the frame accomplishes data transfer between adjacent nodes: establishing the connection, controlling the transmission speed, and preserving data integrity.
Physical layer
The physical line is only responsible for carrying the data, bit by bit, from this host to the next destination.
When the next destination receives the data, it takes it from the physical layer and unwraps it layer by layer, link layer, then network layer, and then the processing described above starts again; after the network, link and physical layers have encapsulated the data once more, it is passed on to the next address.
In the process above you can see a routing-table lookup, and the routing table itself is built by routing algorithms. In other words, routing algorithms are only used between routers to build and maintain routing tables; the actual data transmission never runs those algorithms, it just consults the table. This is an important point, and the common routing algorithms are worth understanding. The TCP protocol as a whole is fairly complex and somewhat similar to the link-layer protocols; some of its key mechanisms and concepts need to be understood carefully, such as numbering and acknowledgement, flow control, the retransmission mechanism, and the send and receive windows.
The basic TCP/IP model and its concepts
Physical layer
Devices: repeaters and hubs. At this layer, data received on one port is forwarded to every port.
Link layer
Protocols: SDLC (Synchronous Data Link Control), among others.
Because a switch keeps a MAC address table, collisions are largely avoided: the switch knows from the destination MAC address which port the data should be forwarded to, instead of forwarding to every port the way a hub does. That is why a switch can partition collision domains.
Network layer
Four main protocols:
Internet Protocol (IP): responsible for addressing and routing packets between hosts and networks.
Address Resolution Protocol (ARP): obtains the hardware address of a host on the same physical network.
Internet Control Message Protocol (ICMP): sends messages and reports transmission errors concerning packets.
Internet Group Management Protocol (IGMP): used by IP hosts to report their host group membership to local multicast routers.
Devices at this layer: layer-3 switches and routers.
Transport layer
Two important protocols: TCP and UDP.
The notion of a port: TCP/UDP use the IP address to identify a host on the network and a port number to identify an application process, i.e. TCP/UDP identify an application process by the host IP address together with the port number allocated to that process. Port numbers are 16-bit unsigned integers, and TCP's port numbers and UDP's port numbers are two independent sequences. Although they are independent, if TCP and UDP both provide some well-known service, the two protocols usually pick the same port number; this is purely for convenience and is not required by the protocols themselves. Thanks to port numbers, several processes on one host can use the transport service of TCP/UDP at the same time, and this communication is end to end: its data is carried by IP, but it is independent of the path the IP datagrams take. In network communication a triple can uniquely identify an application process globally: (protocol, local address, local port number).
In other words, TCP and UDP may use the same port number.
And you can see that the tuple (protocol, source port, source IP, destination port, destination IP) fully identifies a network connection.
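A quick way to convince yourself that the TCP and UDP port spaces really are independent is that binding one socket of each kind to the same port number succeeds; a minimal sketch (the port 8888 is arbitrary, my own illustration):
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
int main(void)
{
    struct sockaddr_in addr;
    int tcp_fd = socket(AF_INET, SOCK_STREAM, 0);
    int udp_fd = socket(AF_INET, SOCK_DGRAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(8888);        /* same port number for both sockets */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    /* Both binds succeed because TCP ports and UDP ports are separate sequences. */
    if (bind(tcp_fd, (struct sockaddr *)&addr, sizeof(addr)) == 0 &&
        bind(udp_fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        printf("tcp and udp both bound to port 8888\n");
    close(tcp_fd);
    close(udp_fd);
    return 0;
}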
Application layer
Over TCP: Telnet, FTP, SMTP, DNS, HTTP.
Over UDP: RIP, NTP (Network Time Protocol), DNS (DNS also uses TCP), SNMP, TFTP.
References:
Understanding the local routing table: http://hi.baidu.com/thusness/blog/item/9c18e5bf33725f0818d81f52.html
Internet transport-layer protocols: http://www.cic.tsinghua.edu.cn/jdx/book6/3.htm (Computer Networks, Xie Xiren)