Thursday, February 13, 2014

A Gaping Hole in the Great Firewall of the United States

Most residential providers in the US have stopped allowing the use of web hosts or other servers on their local networks. However, a major flaw, both technical and legal (NOTE: I AM NOT A LAWYER AND THIS IS NOT LEGAL ADVICE), exists in their system -- the Teredo tunneling service for IPv6. If your provider allows you to connect to the Teredo IPv6 tunneling service, it is possible to listen for inbound connections. That's right: you can actually use Teredo to set up a hidden host and listen for inbound connections (though not hidden in the sense of anonymity -- it can still be traced back to you). This allows you to CREATE YOUR OWN WEBSITES ON A RESIDENTIAL CONNECTION (to a certain extent).
However, if you've been reading my recent posts, you'll realize that although this may be quite a large hole, it may be easily patched -- if not technologically, then by legal changes to ISP agreements allowing them to terminate service if you are found using Teredo to run a server. No ISP that I know of has done this yet, but if you have run into this particular problem with Teredo services, please feel free to leave a comment below. Also, please note that this solution is temporary and will certainly not work forever. If you're interested in setting up an IPv6 router, I will soon have global masters available running Teredo which you can replicate to.
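As a sketch of the idea, here is a minimal Python listener. It assumes your machine already has a working Teredo interface (an address in 2001:0::/32); binding to "::" makes the socket reachable over that interface as well. For demonstration, the client here connects over IPv6 loopback -- a remote peer would use your Teredo address instead.

```python
import socket
import threading

def serve_once(sock):
    """Accept a single connection and echo one message back."""
    conn, addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"hello from a residential host: " + data)

# Bind to all IPv6 addresses.  If the machine has a Teredo
# interface (a 2001:0::/32 address), inbound IPv6 connections
# relayed through Teredo will land on this socket too.
listener = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
listener.bind(("::", 0))          # port 0 = let the OS pick one
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

# Local demonstration client; a remote visitor would dial your
# Teredo address rather than ::1.
client = socket.create_connection(("::1", port))
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
print(reply.decode())
```

Whether an outside peer can actually reach the socket depends on your provider leaving Teredo traffic (UDP) unfiltered.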

Wednesday, February 12, 2014

Creating a Free and Open Internet -- Sponsors (with IP/DNS addresses)

Global masters (traffic to these servers WILL be advertised in globalRoutes.db)
IDWNet Cloud Computing: elcnet.servehttp.com:3801, UDP/IP
My personal tablet: tablet.idwnetcloudcomputing.com:3200, UDP/IPv6


ASP.NET web hosts (nothing fancy; they just require TCP ports 80 and 443 to be open, as well as Internet Information Services installed)
No entries

Known bugs

  1. Routing responses can currently be spoofed if a route was valid when a system sent signed packets through it, but the route later becomes invalidated or compromised. This vulnerability does NOT affect the integrity of data received, but may result in dropped or duplicated data packets.



GlobalGrid routing service -- Enabling free and open communications on the Internet

Few realize it, but today's Internet is very highly centralized around a few big players who control nearly everything -- be it Facebook, Google, Amazon, Microsoft, or Yahoo!. All these services have one thing in common: they provide the illusion of free and open communication. That's right, the illusion -- not actual free and open communication. While it may currently be possible for an individual to post nearly anything on an existing Internet forum, the individual's tool set for communicating is extremely limited.
For example, as I'm writing this post using Blogger, I'm limited to the applications and services that Google provides me with. I can't extend the service with custom native code, deploy dynamic code to Google's servers, or turn this blogging experience into my own unique website. No, that is not possible. As I'm writing this, I understand that my content is hosted and managed by Google -- anything they decide to do with it is completely out of my control; it's no longer my own content, it's Google's content. The same would be true if I posted on Facebook, Yahoo!, Twitter, or any other similar service -- I have no direct connection to my readers and no way to guarantee that the correct content reaches them in a manner controlled by me, the author. Using any of these services leaves Internet users completely at the mercy of these providers; there is no way around this.
Many times people have asked me to create "the next Facebook" for them, or "the next YouTube", or even "the next Google". While I sincerely doubt that merely replicating existing services would succeed as a business plan, I realized that, even if I wanted to, I couldn't legally set any of these services up. Why? No residential Internet Service Provider or university in their right mind would let someone do something like that. It's not in their best interests to allow people to host their own content -- instead, most Internet Service Providers prefer to simply act like television stations: sending you data all the time, giving you the illusion that you could run your own station, but never letting you actually broadcast your own channel.
Many Internet Service Providers do this through a combination of blocking inbound TCP connections and slow upload speeds. Slow upload speeds have been around for a while; they prevent most individuals from serving their users at an acceptable speed, making it impossible to compete. Worse, many providers are now blocking inbound connections, making it impossible for someone who wants to reach your server to actually connect to your website. This is by far the most serious problem. Internet Service Providers are employing technology to actively prevent you from hosting content. This makes it nearly impossible to innovate on the Internet, as you cannot run your own server and actually accept connections to it. And even if you could circumvent your Internet Service Provider's restrictions, you would still be unable to provide an acceptable level of service to your users because of the slow upload speed -- there's no way to increase that.
So, what is the solution to these problems?
Well, Tor has so far been the best way to get around censorship and create your own servers, but it doesn't eliminate the speed problem, and you still only get a single physical link to the server.
So, as a proposed solution, I have been working on Global Grid -- the Internet Protocol of the future! Global Grid is an entirely P2P solution, in which your computer generates a unique identifier prior to establishing any connections. This identifier is your replacement for an IP address -- it will stick with you when you change IP addresses or network interfaces, or even switch to a completely different Internet Service Provider! Not only that, but your computer can listen for traffic on multiple interfaces simultaneously using the same identifier, maximizing your physical access to bandwidth. However, this still isn't a good solution for most people, because the typical residential user doesn't have access to multiple concurrent Internet connections. The server once again ends up being the bottleneck, and the infrastructure is still too centralized.
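The Global Grid identifier scheme isn't spelled out above, but a common approach in P2P overlays is to derive the identifier from locally generated key material, so it survives changes of IP address, interface, or provider. Here is a hypothetical Python sketch; the file name and the use of a raw random secret (standing in for real public-key material) are my own illustration, not the actual Global Grid design.

```python
import hashlib
import os
import tempfile

def generate_peer_id() -> str:
    """Derive a stable identifier by hashing key material that is
    generated once and stored on disk.  A random 32-byte secret
    stands in for a real public key here."""
    key_file = os.path.join(tempfile.gettempdir(), "peer.key")  # hypothetical location
    if os.path.exists(key_file):
        secret = open(key_file, "rb").read()
    else:
        secret = os.urandom(32)
        with open(key_file, "wb") as f:
            f.write(secret)
    # The identifier travels with the key file, not with any
    # IP address or network interface.
    return hashlib.sha256(secret).hexdigest()

peer_id = generate_peer_id()
print(peer_id)
```

Because the identifier is derived from the stored key rather than the network, calling `generate_peer_id()` again -- even after switching networks -- yields the same value.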
Solving the centralization problem
Solving the centralization problem is a tricky issue -- we've grown very used to centralized resources on the Internet, and moving away from this centralization would require a complete paradigm shift in the way we fundamentally access the Internet. To decentralize, a single resource would not be associated with just one website, one server, or one unique address; rather, it would be associated with multiple addresses -- one for each user of the service. Each user would participate both in hosting the service and in consuming it (similar to BitTorrent). For example, rather than going through Facebook to exchange messages with your friends, you would establish a DIRECT connection to each of your friends' computers and communicate without the need for any centralized resource. Each system would act as a client, a server, and a router.
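As a toy illustration of this model, the sketch below has two "friends" exchange messages directly over UDP on one machine, with each endpoint acting as both client and server on a single socket -- no central server sits in between. The names and addresses are for demonstration only.

```python
import socket

# Each peer opens one UDP socket that both sends and receives.
alice = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
alice.bind(("127.0.0.1", 0))      # port 0 = OS-assigned
bob = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bob.bind(("127.0.0.1", 0))

# Alice messages Bob directly...
alice.sendto(b"hi Bob", bob.getsockname())
msg, alice_addr = bob.recvfrom(1024)

# ...and Bob replies straight back, acting as a "server" one
# moment and a "client" the next.
bob.sendto(b"hi Alice", alice_addr)
reply, _ = alice.recvfrom(1024)
print(msg.decode(), "/", reply.decode())
```

In a real deployment each peer would also forward traffic for others (the router role), which this sketch omits.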


BUT, what about ISPs that block inbound connections?
* Many ISPs, including my own, are now blocking ALL inbound network connectivity. This means that the Internet, as a worldwide community, will need to set up a series of servers outside these "firewalled" ISPs with enough capacity to exchange routing information globally. Once one server comes online, it should be possible for it to dynamically evaluate other people's providers and create more routing servers as needed.
The next step
So my next step will be to establish a service running outside of a Great Firewall. I plan to collaborate with some people (who?) to set up a few initial global routing servers. I will write a future post detailing the routing information for these servers, so that people can freely use them to exchange information in a P2P manner on the Internet.

What the service is NOT
* This service is NOT intended to be a cloaking service, anonymization proxy, or a means to access "regular" Internet websites. This system ONLY allows access to Global Grid services and CANNOT route traffic through normal Internet methods.
* This service is NOT a playground for people to launch denial of service (DoS) attacks on other systems. Any use of the systems in this manner will result in your being added to a global block list (coming soon) that people may optionally subscribe to.

Sunday, February 12, 2012

Writing high-performance code --- Managed languages aren't inherently "slow"

Historically, it was common to optimize your code at the lowest possible level to minimize the number of CPU instructions executed and get the most performance out of the CPU you were using. Since the creation of managed languages, however, optimizing for specific CPUs has become less common and less necessary. Despite the advent of these newer, platform-independent languages, many people still avoid using them in fields such as game design, 3D engine design, and physics simulation because they believe these languages are "too slow" for the job. If you're writing for a real-time embedded device, a managed language MAY NOT be the best tool for the job, but for most consumer applications, managed languages are easier to deploy, easier to debug, and often more reliable than programs created in native languages such as ASM, C, and C++. In my experience with consumer games and physics simulators, the main bottleneck is the communication between the GPU and the CPU -- and even more often, the hard drive (SSDs are still too expensive for typical consumers). Using a native language rather than a managed one will not speed up disk access times, and will therefore not improve performance in this area. Many managed VMs have built-in caching and optimized buffering mechanisms that will speed this up, if enabled (such as BufferedStream in .NET, and the ReadAhead feature in .NET 4.5/Windows 8).
In addition to application performance, it is also typically faster to design code in a managed language than in a native one, and easier to port it to other platforms. Many native applications STILL lack 64-bit CPU support (particularly on Windows)! In a managed application, however, one can typically compile for "Any CPU", which allows the application to be compiled once and run on ARM, x86, and x64 chips at full speed once the code has been JITted.
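As a rough analogue of that buffering effect (BufferedStream itself is a .NET class; this sketch uses Python's equivalent), compare reading a file one byte at a time with and without a user-space buffer. The timing gap illustrates why I/O strategy, not language choice, dominates here.

```python
import io
import os
import tempfile
import time

# Write a 1 MiB test file.
path = os.path.join(tempfile.gettempdir(), "io_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))

def read_byte_at_a_time(stream):
    while stream.read(1):
        pass

# Unbuffered: every read(1) is a real OS call.
with open(path, "rb", buffering=0) as raw:
    t0 = time.perf_counter()
    read_byte_at_a_time(raw)
    unbuffered = time.perf_counter() - t0

# Buffered: the runtime fetches large chunks and serves read(1)
# from memory -- the same idea as BufferedStream in .NET.
with open(path, "rb", buffering=0) as raw:
    t0 = time.perf_counter()
    read_byte_at_a_time(io.BufferedReader(raw))
    buffered = time.perf_counter() - t0

print(f"unbuffered: {unbuffered:.3f}s  buffered: {buffered:.3f}s")
```

On typical hardware the buffered pass is faster by an order of magnitude or more, and that gap exists regardless of whether the surrounding program is native or managed.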

Friday, December 16, 2011

IDWOS 2012 - Beginning the era of secure, distributed desktop computing

Soon we will be releasing our open-source version of IDWOS 2012 --- a distributed, high-performance, secure operating system which you can access anywhere. Run all your apps on your phone, your local computer, and your web browser, without the worries of a typical cloud computing infrastructure (with the exception of browser access --- modern web browsers inherently possess a number of security flaws that make them unfit for secure computing; please read this article for details on security issues with browsers).



Alternatives to direct browser access

As an alternative to using a web browser, you may download a secure connection program from our source repository, which will be published within the next few weeks. This software will be dual-licensed under the AGPL and a proprietary license (we need to make a profit somehow). This secure connection utility will allow you to optionally synchronize your data with one of our freely available Cloud servers after encrypting it, to ensure that no one, including our employees, is able to read your private information. This is in contrast to a number of other Cloud providers, such as Microsoft and Amazon, which store your data in a form their employees could access, and read, if they wanted to. As a cloud hosting company, we are concerned by these practices, and we intend both to modernize our own security infrastructure and to encourage companies such as Microsoft and Amazon to do the same. Users of these services should always encrypt their information before sending it to any cloud service provider. In the future, cloud hosting providers should document the technological infrastructure in place that prevents any employee from gaining access to sensitive customer information. IDWOS aims to solve this problem by giving our customers direct access to their information, and even the ability to store the data on their own computers and access it over a P2P network infrastructure.
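To illustrate the encrypt-before-upload principle, here is a Python sketch. The SHA-256 counter-mode keystream below is for illustration ONLY and is not a vetted cipher; a real client should use an audited AES implementation. The point is simply that only ciphertext ever leaves the machine.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Illustrative SHA-256 counter-mode keystream.  NOT a vetted
    cipher -- use an audited AES library in real software."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)       # fresh per message
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

key = secrets.token_bytes(32)             # never leaves the client
blob = encrypt(key, b"my private notes")
# Only `blob` is uploaded; the provider sees opaque bytes.
print(decrypt(key, blob).decode())
```

Because the key is generated and held locally, the hosting provider stores data it cannot read -- exactly the property the post argues cloud providers should offer by default.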

Our Cloud goes P2P

P2P distributed computing and distributed storage are the future direction of the Cloud. We believe that each user of a cloud computing service should directly decide where their data is stored. They can store it only on their local computers, with the ability to access the data remotely, or they can synchronize it with our P2P cloud and access it anywhere. Either way, we will do our best to keep our clients' information secure. Transitioning the Cloud from the server to P2P marks the beginning of a less centralized Internet, a more democratic system of data storage, and increased security for our users.

What about the future of servers?

In the future, we see servers being used as data access points rather than data storage centers. Servers should be used solely for accessing your data from any device, and for facilitating communication between the distributed Cloud and devices that cannot run our Client application. Because web browsers will still be used, demand for servers will not decrease.

Sunday, November 20, 2011

A letter to my representatives

I am writing as your constituent in the 3rd Congressional district of Minnesota, and as a copyright holder on a technology product, the Global Grid. Before disregarding this letter as another "spam message", I urge you to remember the promise you made when you began your campaign: your promise to uphold the United States Constitution and to represent the people of the United States and the people of Minnesota. Your duty is to uphold the views of the people, not the views of a few CEOs of large corporations.

I oppose H.R. 3261, the Stop Online Piracy Act, and believe it would violate the First Amendment of the US Constitution and its guarantee of freedom of speech. The Internet is presently a very important vehicle for free communication in today's society. Many people are afraid to speak openly about political matters for fear of unlawful persecution, physical harm, or other reprisals. The Internet provides a means for people to communicate (relatively) anonymously, without necessarily disclosing their physical appearance, location, or identity to the person they are talking to. This is crucial to getting out political messages and promoting free speech in the 21st century.

This law threatens the very existence of free speech in our country. It will allow large corporations to fight each other with frivolous copyright infringement claims, completely shutting down each other's payment services and online websites. I understand this bill is intended to protect the integrity of intellectual property and increase innovation in the United States. In practice, however, it will do the exact opposite. This law will allow companies to DIRECTLY TAKE DOWN COMPETING WEBSITES WITHOUT DUE PROCESS OF LAW. PRIVATE companies themselves will be able to ACT AS JUDGES in these matters. In short, you are giving private companies, motivated SOLELY BY PROFIT, complete control over the Internet in the United States.

Sincerely,
Brian Bosak

Thursday, October 20, 2011

On shared hosting - A lightweight, secure, virtualization environment

IDWOS 2012 is a virtual OS which features application-layer virtualization for untrusted processes running on the same server. Similar to Singularity, it only supports managed code. Native code always has the potential to exploit a secure system (even on a virtual processor) by calling into code outside the virtualized environment through a security hole. IDWOS 2012 is written entirely in C#, and utilizes .NET remoting to perform communication between an application running on the server and the host system. The system features a virtual file system (which is isolated per-process), memory isolation, and the ability to quickly halt an unwanted application via a remote administration API.
Below is a diagram of the system architecture:
In this diagram, all processes running inside the "virtual kernel" are completely isolated from each other, and assigned separate "security tokens" via the Host Operating system, and the token redirection layer.
When a user requests a web page, events happen in the following order:
  1. The kernel looks at the URL of the web page and determines whether an application is already loaded in RAM to process the request. If no application is loaded for the URL, the system looks in its application database for the handler application associated with the URL.
  2. If it does not find a handler, it returns an error page to the client. If a handler is found, the system loads the assembly into the virtual kernel and sends the security token to the virtual application as a WExecutionContext object.
  3. Once the virtual process is created, control is passed back to the native thread running in trusted memory. At this stage, the virtual process is allowed to execute any function within the virtual environment, but is NEVER allowed to call into any native code on the host operating system.
  4. The native thread then jumps back to step 2 and notifies the remote process running in the virtual kernel that a request has been received from a client, passing in a ClientWebRequest object, which contains a Stream for reading/writing data to/from the client and information about the request headers received from the client.
  5. The virtual process is free to process the request and close the stream when it's done. The application keeps running until the server requests that it terminate. The application will NOT be notified prior to the firing of this event, so it should expect to lose its state at any time. The application may also ask the virtual runtime to kill its process (useful for error handling).
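The dispatch order above can be sketched as follows. The class, method, and handler names here are invented for illustration and do not reflect the actual IDWOS API; the real system works with .NET assemblies and security tokens rather than plain callables.

```python
# Hypothetical sketch of the request-dispatch order described above.
class VirtualKernel:
    def __init__(self, app_database):
        self.app_database = app_database   # URL -> handler factory
        self.loaded = {}                   # handlers already "in RAM"

    def handle_request(self, url, request):
        # Step 1: is a handler for this URL already loaded?
        handler = self.loaded.get(url)
        if handler is None:
            factory = self.app_database.get(url)
            # Step 2 (no handler registered): return an error page.
            if factory is None:
                return "404: no handler for " + url
            # Steps 2-3: load the handler into the virtual kernel,
            # standing in for assembly loading + token hand-off.
            handler = factory()
            self.loaded[url] = handler
        # Steps 4-5: hand the request to the virtual process and
        # return whatever it writes back to the client.
        return handler(request)

kernel = VirtualKernel({"/hello": lambda: (lambda req: "hello, " + req)})
ok = kernel.handle_request("/hello", "world")
err = kernel.handle_request("/missing", "world")
print(ok)
print(err)
```

A second request to the same URL skips the database lookup entirely, which is the point of keeping loaded handlers cached in the kernel.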

On an unrelated note, check out Kuder Productions here