Cascade Transport

High-Speed Data Transfer and Video Streaming Engine
with CRN High-Performance Plug-and-Play TCP

Most web applications today use TCP for communication, including HTTP, SMTP, and FTP. Theoretically, conventional TCP's throughput can be estimated with a TCP throughput calculator or network throughput calculator, so pure TCP tuning has a theoretical throughput ceiling. To truly optimize a network and unlock its performance, more fundamental work is needed.
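As a concrete illustration of that ceiling, the well-known Mathis model bounds a standard TCP flow's throughput by MSS / (RTT * sqrt(p)), where p is the packet loss rate. A quick back-of-the-envelope check (the 100 ms / 1% figures below were chosen to match the conditions emulated with tc later in this document):

```shell
# Mathis model: throughput <= MSS / (RTT * sqrt(p))
# Assumed figures: MSS = 1460 bytes, RTT = 100 ms, p = 1% packet loss
awk 'BEGIN { mss = 1460; rtt = 0.1; p = 0.01;
             bw = mss / (rtt * sqrt(p));          # bytes per second
             printf "%.2f Mbit/s\n", bw * 8 / 1e6 }'
# Prints: 1.17 Mbit/s
```

Under 100 ms RTT and 1% loss, a single conventional TCP flow is capped at roughly 1.2 Mbit/s no matter how large the buffers are, which is why parameter tuning alone cannot close the gap.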

Building on the research work "One more bit is enough" by Xia, Subramanian, Stoica, and Kalyanaraman, published at ACM SIGCOMM'05, we have developed CRN VCP, which achieves efficient and fair bandwidth utilization while minimizing packet loss in high bandwidth-delay product networks. CRN VCP is highly efficient for data transfer and video streaming over the Internet and wireless networks.

1. Difference between CRN VCP and conventional TCP optimizations

The simplest TCP optimization is plain parameter tuning, e.g., raising the TCP auto-tuning send-buffer limits (the min, default, and max number of bytes) so that the maximum is 32 MB:

sysctl -w net.ipv4.tcp_wmem="4096 65536 33554432"
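The receive side has an analogous set of limits; a symmetric tuning (an assumption here, not something the document prescribes) would raise the receive-buffer maximum to 32 MB as well:

```shell
# min, default, and max receive-buffer sizes in bytes (max = 32 MB)
sysctl -w net.ipv4.tcp_rmem="4096 87380 33554432"
```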

A better optimization is changing the congestion control algorithm, e.g., switching from TCP Reno to TCP CUBIC. Developing a new congestion control algorithm is usually the work of computer networking researchers.

sysctl -w net.ipv4.tcp_congestion_control=cubic
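You can check which algorithms the running kernel offers and confirm that the switch took effect (algorithms not listed, such as BBR on some systems, may need their kernel modules loaded first):

```shell
# List the congestion control algorithms currently available to the kernel
sysctl net.ipv4.tcp_available_congestion_control

# Confirm the active algorithm
sysctl net.ipv4.tcp_congestion_control
```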

A harder optimization is partially rewriting TCP for a specific OS by tweaking its kernel code. Such an optimization is limited to the customized OS kernel and is usually used internally by an organization, such as a CDN or SD-WAN company.

An even harder optimization is completely rewriting TCP and making it work universally across OS kernels, on every server and device. Since in practice OS kernels do not maintain stable internal APIs across kernel versions, making the same code work on all of them is a very challenging software engineering task given the backward-compatibility requirements. It is like reinventing TCP decades later.

The last and hardest optimization is making the whole thing plug-and-play: installable on a running system without any change to the kernel, without restarting the system, and without recompiling or restarting the applications running on it.

Without changing the standard TCP API, CRN VCP is a new implementation of TCP that frees the potential of the decades-old protocol and delivers the high performance that Internet applications using conventional TCP have always wanted. Installation is almost instant with a single command line, much like the Linux insmod command, and its footprint inside the Linux kernel is only a few hundred KB. The system immediately gains more robust data transfer and video streaming performance with zero downtime. To keep the enterprise UX friendly, CRN VCP can be turned on or off at any time via sysctl:

# To enable CRN VCP
sysctl -w net.crn.vcp_enable=1

# To disable CRN VCP
sysctl -w net.crn.vcp_enable=0

2. Benchmarking

Practically speaking, TCP's throughput is often measured with benchmarking tools such as iperf. To evaluate TCP's throughput under various network scenarios, people also use tc, a standard Linux traffic-control utility, to add packet loss and latency to network interfaces and emulate different network conditions. Simple instructions on using iperf and tc to evaluate TCP's throughput and performance are provided below.

For the evaluation, we need two computers (physical or virtual machines): one will be used as the iperf server, the other as the iperf client. Suppose the iperf server's IP address is <server_ip>.

3. iperf Installation

On Ubuntu/Debian, run the command

# apt install iperf3

On CentOS/RHEL, run the command

# yum install iperf3

4. Run TCP Benchmarking Test

4.1 Run the two commands below on the iperf server:

$ tc qdisc add dev eth0 root handle 1:0 netem delay 100ms loss 1%

$ iperf3 -s -p 10000

The first command uses the standard Linux tc utility to emulate an outbound network connection with 100 ms delay and 1% packet loss on the server side. Change eth0 if the server uses a different network interface to communicate with the client. The second command simply starts the iperf server.
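Two tc housekeeping commands are useful while experimenting: one to inspect the qdisc currently installed on the interface, and one to remove the emulated impairment when the test is done (again, replace eth0 if your interface differs):

```shell
# Show the qdisc(s) currently attached to eth0
tc qdisc show dev eth0

# Remove the netem impairment and restore the default qdisc
tc qdisc del dev eth0 root
```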

4.2 Run the two commands below on the iperf client:

$ tc qdisc add dev eth0 root handle 1:0 netem delay 100ms loss 1%

$ iperf3 -c <server_ip> -p 10000 -R -i 1 -t 30

Again, the first command uses the standard Linux tc utility to emulate an outbound network connection with 100 ms delay and 1% packet loss on the client side. Change eth0 if the client uses a different network interface to communicate with the server, and change <server_ip> to your iperf server's IP address. The second command starts an iperf TCP test that requests data from the iperf server (-R makes the server send and the client receive).
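If you want to script the comparison rather than read the per-second lines, iperf3 can emit machine-readable JSON; below is a sketch, assuming jq is installed and <server_ip> is your iperf server's address, that pulls out the average receive rate:

```shell
# Run the same 30-second reverse test and print the mean throughput in Mbit/s
iperf3 -c <server_ip> -p 10000 -R -t 30 --json \
  | jq '.end.sum_received.bits_per_second / 1e6'
```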

Note: TCP's throughput is sensitive to packet loss in the outbound (sending) direction. So make sure to apply the tc command on the data source/sending side when emulating network conditions.

If you would just like to see how TCP behaves and measure its throughput under certain network conditions, the steps above are sufficient and you can simply stop here. The steps below demonstrate how Cascade Transport, a.k.a. CRN VCP, resolves the conventional TCP problems you just observed.

5. Compare TCP with CRN VCP

5.1 Install CRN VCP on the iperf server

First, get your evaluation copy of CRN VCP (link), with a file name such as crn-trial-centos-8-v405-d150.tar, then install CRN VCP by running the command below. The installation usually takes a few seconds.

tar xf crn*.tar; crn/bin/

5.2 Rerun the iperf test

You should see a much higher throughput reported on the screen (in this scenario, CRN VCP delivers roughly 80x the throughput of conventional TCP).

6. CRN VCP Demo for File Transfer (6min)

7. CRN VCP Demo for Video Streaming (12min)