I was wondering if it would be more efficient to use assembler-based networking than C sockets. I'm sure you can saturate almost any bandwidth with C sockets, but could there still be a use? And if so, could someone give me an example? Much appreciated.
It would be the same.
First of all, although it depends on the OS, you'll generally never have direct access to the hardware, so you'll have to go through the OS's wrappers (Winsock or Berkeley sockets or whatever). And that's almost exactly what C does: it calls the OS's function.
Secondly, even if you could save a little time, the amount of time saved would be insignificant compared to the amount of time it takes for the information to travel over the wire to the remote computer and back.
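To make iago's point concrete, here's a hedged sketch of the C side. Every one of these calls is a thin wrapper that traps into the kernel, which does the real work; hand-written assembler would end up issuing exactly the same system calls (the address and port are just placeholders):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* socket() and connect() are thin wrappers around kernel calls */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port   = htons(80);                     /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr); /* placeholder host */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        perror("connect");

    close(fd);
    return 0;
}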
I concur with iago; about the only thing you could do that would grant any speed-up is inlining the OS transition. On most systems, the function your C code calls will end up invoking some instruction (such as sysenter on x86) to transfer control to the OS, which carries out the actual copy and such. So you could get a very minor performance gain if you inlined the body of the call, such that your code invoked sysenter directly. However, as iago said, the savings are insignificant and therefore not worth the effort, except as a learning exercise. :)
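For anyone curious what "invoking sysenter directly" would actually look like: below is a minimal sketch, assuming GCC on x86-64 Linux, where the modern equivalent of sysenter is the syscall instruction (the raw_write name is invented for this example). All it saves is the call into the libc wrapper; the mode switch and the in-kernel copy still cost the same, which is why the gain is negligible:

#include <unistd.h>

/* Invoke the Linux x86-64 write(2) system call directly,
 * bypassing the libc wrapper entirely. */
static ssize_t raw_write(int fd, const void *buf, size_t len)
{
    ssize_t ret;
    __asm__ volatile (
        "syscall"                     /* transfer control to the kernel */
        : "=a"(ret)                   /* result comes back in rax       */
        : "a"(1),                     /* syscall number 1 = write       */
          "D"(fd), "S"(buf), "d"(len) /* args go in rdi, rsi, rdx       */
        : "rcx", "r11", "memory");    /* syscall clobbers rcx and r11   */
    return ret;
}

int main(void)
{
    raw_write(1, "hello\n", 6);
    return 0;
}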
Awesome and thanks a bunch
WHAT IF - we implemented a pure assembler/C, no-filehandle, user-mode, raw socket TCP stack? Can you say, incredible scalability?
Quote from: $t0rm on August 11, 2004, 09:38 PM
WHAT IF - we implemented a pure assembler/C, no-filehandle, user-mode, raw socket TCP stack? Can you say, incredible scalability?
Raw sockets generally require some kind of file handle for the socket, so I don't think you will have any luck there.
In any case, what would being user mode or not using file handles have to do with scalability?
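To show what Skywing means: even asking for a "raw" socket hands you back an ordinary descriptor. A minimal Berkeley-sockets sketch, assuming Linux (the call needs root, or CAP_NET_RAW on newer kernels):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    /* Raw or not, the socket still arrives as a file descriptor */
    int fd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
    if (fd < 0) {
        perror("socket");  /* EPERM without root/CAP_NET_RAW */
        return 1;
    }
    printf("raw socket descriptor: %d\n", fd);
    return 0;
}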
I mean, one file handle for the raw socket, and the individual TCP connections are NOT done with file handles. That way, there's no overhead to the OS in terms of file handles.
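What $t0rm is describing might look roughly like this; the names (conn_key, user_tcp_conn, lookup) are invented for this sketch, not taken from any real stack. One descriptor serves the whole process, and each TCP connection is just a record in user memory, demultiplexed by its 4-tuple:

#include <stdint.h>
#include <string.h>

struct conn_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

struct user_tcp_conn {
    struct conn_key key;
    int             state;     /* SYN_SENT, ESTABLISHED, ...    */
    uint32_t        snd_nxt;   /* next sequence number to send  */
    uint32_t        rcv_nxt;   /* next sequence number expected */
    /* send/receive buffers and timers would live here too      */
};

static struct user_tcp_conn conns[65536]; /* no kernel handles used */
static int nconns;

/* Find the connection a packet read from the raw socket belongs to */
static struct user_tcp_conn *lookup(const struct conn_key *k)
{
    for (int i = 0; i < nconns; i++)   /* a real stack would hash */
        if (memcmp(&conns[i].key, k, sizeof(*k)) == 0)
            return &conns[i];
    return NULL;
}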
Quote from: $t0rm on August 11, 2004, 09:53 PM
I mean, one file handle for the raw socket, and the individual TCP connections are NOT done with file handles. That way, there's no overhead to the OS in terms of file handles.
I don't think that much of the overhead associated with most TCP stacks is directly related to file handles. What TCP stack are you talking about in particular, though, for comparison purposes?
Linux and Windows. FreeBSD cheats and doesn't count.
For those interested, I wrote a simple test app for the NT TCP stack (though it requires Windows XP or Windows Server 2003, for lack of ConnectEx on Windows 2000) - you can grab it here (http://www.valhallalegends.com/skywing/files/tcptest/tcptest.cpp).
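Side note for anyone building against that: ConnectEx isn't a normal DLL export, so a program like the test app has to fetch the function pointer at runtime. A minimal sketch of the standard lookup (not lifted from the linked source):

#include <winsock2.h>
#include <mswsock.h>
#pragma comment(lib, "ws2_32.lib")

/* Retrieve the ConnectEx extension function for a given socket */
static LPFN_CONNECTEX get_connectex(SOCKET s)
{
    GUID guid = WSAID_CONNECTEX;
    LPFN_CONNECTEX fn = NULL;
    DWORD bytes = 0;
    if (WSAIoctl(s, SIO_GET_EXTENSION_FUNCTION_POINTER,
                 &guid, sizeof(guid), &fn, sizeof(fn),
                 &bytes, NULL, NULL) == SOCKET_ERROR)
        return NULL;  /* not available on this system */
    return fn;
}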
Quote from: $t0rm on August 11, 2004, 09:53 PM
I mean, one file handle for the raw socket, and the individual TCP connections are NOT done with file handles. That way, there's no overhead to the OS in terms of file handles.
#1: The OS would RST your connections for you automagically. (The kernel's own TCP stack knows nothing about your user-mode connections, so when segments arrive for them it treats the port as closed and helpfully replies with RSTs.)
#2: All the tcp connection handling would be done in user mode, incurring a performance penalty over doing it in the kernel.
#3: This might possibly scale to a larger number of active connections than the kernel supports, if you can write something more memory-efficient than the kernel code. But with that number of connections, chances are you'll be limited by speed, not memory. You're supposed to actually do something with those connections too?
Would not doing it in the kernel incur overhead? Wouldn't they both run at the same internal clock speed?
Quote from: $t0rm on August 12, 2004, 09:54 AM
Would not doing it in the kernel incur overhead? Wouldn't they both run at the same internal clock speed?
For one, you guarantee that at least one more context switch must occur if you do it in user mode than in kernel mode.
<newbie>
What's a context switch?
</newbie>
Quote from: $t0rm on August 12, 2004, 05:50 PM
<newbie>
What's a context switch?
</newbie>
Given the context of the discussion, I believe the context of the running code must switch from user mode to kernel mode. ;) Granted, I'm not sure.
Quote from: MyndFyre on August 12, 2004, 08:26 PM
Quote from: $t0rm on August 12, 2004, 05:50 PM
<newbie>
What's a context switch?
</newbie>
Given the context of the discussion, I believe the context of the running code must switch from user mode to kernel mode. ;) Granted, I'm not sure.
Yes, that's correct. User mode is what most stuff runs in, and it's very restricted. Kernel mode is also called "supervisor mode" and has access to everything.
The relevant part here would be that you would not need to do things like switch the process context (e.g. reload the page tables, flush TLBs, and so on) to handle a TCP/IP message in kernel mode.
You might see something like this:
Network card -> driver ISR -> DPC -> [intermediate NDIS layers] -> tcpip.sys
If you were doing this in user mode, it would look more like:
Network card -> driver ISR -> DPC -> [intermediate NDIS layers] -> tcpip.sys (yes, for raw sockets) -> afd.sys -> queue a completion notification to waiting user thread -> (thread dispatcher, sometime in the future selects user thread to run, which can now handle incoming data on the raw socket).
Of course, then you also have to route the TCP message to whoever is actually associated with that socket, which is probably going to be in a *different* process, so you have to wait for the dispatcher to run the sleeping application thread before the TCP-utilizing application can receive its data.
If you were handling TCP in kernel mode, you might be able to go to the right user mode process directly instead of adding a secondary process and thread that has to be woken to handle the TCP protocol itself.