TCP segments disappearing

Problem Description

I've run into a problem that googling doesn't seem to solve. To keep it simple: I have a client written in C# and a server written in C running on Linux. The client calls Send(buffer) in a loop 100 times. The problem is that the server receives only about a dozen of them. If I put a big enough sleep in the loop, everything turns out fine. The buffer is small, about 30 B. I've read about Nagle's algorithm and delayed ACKs, but that doesn't answer my question.

    for (int i = 0; i < 100; i++)
    {
        try
        {
            client.Send(oneBuffer, 0, oneBuffer.Length, SocketFlags.None);
        }
        catch (SocketException socE)
        {
            if ((socE.SocketErrorCode == SocketError.WouldBlock)
                || (socE.SocketErrorCode == SocketError.NoBufferSpaceAvailable)
                || (socE.SocketErrorCode == SocketError.IOPending))
            {
                Console.WriteLine("Never happens :(");
            }
        }
        Thread.Sleep(100); // problem solver, but why??
    }

It looks like the send buffer gets full and rejects data until it empties again, in both blocking and non-blocking mode. Even better, I never get any exception!? I would expect some of those exceptions to be raised, but nothing. :( Any ideas? Thanks in advance.

Recommended Answer

I was naive to think there was a problem with the TCP stack; it was my server code. Somewhere in between the data manipulation I used the strncpy() function on the buffer that stores the messages. Every message contained a \0 at the end. strncpy() copied only the first message (the first string) out of the buffer, regardless of the count that was given (the buffer length). That resulted in me thinking I had lost messages.
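
A minimal C sketch of that pitfall (the buffer contents are hypothetical, purely to illustrate strncpy()'s C-string semantics):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Hypothetical receive buffer: two \0-terminated messages
           coalesced by TCP into a single recv() result. */
        char recvbuf[16] = "msg1\0msg2\0";
        char copy[16];

        /* strncpy() treats the source as a C string: it stops at the
           first '\0', even though the count covers the whole buffer,
           and pads the rest of 'copy' with '\0' bytes. */
        strncpy(copy, recvbuf, sizeof(copy));

        printf("%s\n", copy);     /* prints "msg1" */
        printf("%s\n", copy + 5); /* prints nothing: "msg2" was never copied */
        return 0;
    }

Because strncpy() stops at the first '\0', every coalesced message after the first silently vanishes.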

When I used the delay between send() calls on the client, messages didn't get coalesced in the buffer. So strncpy() worked on a buffer holding a single message and everything went smoothly. That "phenomenon" led me into thinking that the rate of the send calls was causing my problems.

Thanks again for the help; your comments made me wonder. :)

Other Recommended Answer

TCP is stream-oriented. This means that recv can return any number of bytes, from one up to the total number of bytes outstanding (sent but not yet read). "Messages" do not exist; sent buffers can be split or merged.

There is no way to get message behavior from TCP, and no way to make a single recv call read at least N bytes. Message semantics are constructed by the application protocol, typically by using fixed-size messages or a length prefix. You can read at least N bytes by doing a read loop, as in the sketch below.
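
For instance, a read loop along these lines (a C sketch to match the server side; recv_exact and recv_message are hypothetical helper names, assuming a connected socket and a 4-byte big-endian length prefix):

    #include <arpa/inet.h>   /* ntohl */
    #include <stdint.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Keep calling recv() until exactly 'len' bytes have arrived.
       Returns 0 on success, -1 on error or if the peer closed early. */
    static int recv_exact(int fd, void *buf, size_t len)
    {
        char *p = buf;
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0)              /* -1 = error, 0 = orderly shutdown */
                return -1;
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* One way to build message semantics on top of the stream:
       read a 4-byte big-endian length prefix, then exactly that
       many payload bytes. Returns the payload length, or -1. */
    static ssize_t recv_message(int fd, char *payload, size_t max)
    {
        uint32_t len_be;
        if (recv_exact(fd, &len_be, sizeof(len_be)) < 0)
            return -1;
        uint32_t len = ntohl(len_be);
        if (len > max)               /* frame too large for caller's buffer */
            return -1;
        if (recv_exact(fd, payload, len) < 0)
            return -1;
        return (ssize_t)len;
    }

The sender would have to write the same 4-byte prefix before each message; with fixed-size messages you would skip the prefix and call recv_exact with the known size.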

Remove that assumption from your code.

Other Recommended Answer

I think this issue is due to the Nagle algorithm:

The Nagle algorithm is designed to reduce network traffic by causing the socket to buffer small packets and then combine and send them in one packet under certain circumstances. A TCP packet consists of 40 bytes of header plus the data being sent. When small packets of data are sent with TCP, the overhead resulting from the TCP header can become a significant part of the network traffic. On heavily loaded networks, the congestion resulting from this overhead can result in lost datagrams and retransmissions, as well as excessive propagation time caused by congestion. The Nagle algorithm inhibits the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.

Calling the client.Send function doesn't mean a TCP segment will be sent right away. In your case, as the buffers are small, the Nagle algorithm will regroup them into larger segments. Check on the server side that the dozen buffers received contain the whole data.

When you add the Thread.Sleep(100), you receive 100 packets on the server side because the Nagle algorithm won't wait that long for further data.

If you really need short latency in your application, you can explicitly disable the Nagle algorithm for your TcpClient: set the NoDelay property to true. Add this line at the beginning of your code:

client.NoDelay = true;
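
For the C side, the POSIX equivalent is the TCP_NODELAY socket option (a sketch, assuming a connected TCP socket descriptor; disable_nagle is a hypothetical helper name):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>

    /* Disable the Nagle algorithm on a connected TCP socket so that
       small writes are sent out immediately instead of being coalesced. */
    static int disable_nagle(int sockfd)
    {
        int one = 1;
        if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
            perror("setsockopt(TCP_NODELAY)");
            return -1;
        }
        return 0;
    }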