Thursday, 12 January 2012

Research Paper Topics & Ideas


Welcome to the Journal of Theoretical and Applied Information Technology. In this listing, we describe research methodologies to help you write a quality research article and assist you in finding a research topic.
As we all know, "research" is the process of collecting information and data about a topic being studied. It is a systematic process of inquiry undertaken to discover, interpret, or revise facts, events, behaviors, or theories, or to make practical applications with the help of such facts. It is a continuous process, and it does not always succeed; in the past we have seen heavily funded projects go down the drain and achieve very little after much promise.
Choosing an interesting and worthy research topic is always a time-consuming process for a research group. Choosing a topic, searching for relevant material, and citing sources is challenging and sometimes painful. We cannot neglect the importance of search engines in this regard; search engines like Google and MSN will always be your true friends. We also advise you to:
  1. Read current newspapers and magazines related to information technology.
  2. Avoid going blindly for a research topic without proper homework, which is nothing but a waste of time. Post queries about your intended research field in technology-related forums and websites, and also search for your topic on websites like "Research Topics" and "Ideas for Term Papers and Reports".
  3. Always choose a topic that can be treated persuasively, is related to your domain of expertise, is one for which you have enough knowledge and resources, and can be developed adequately within the timeframe.
  4. If you are using someone else's idea, always cite it. Plagiarism does not mean that you cannot quote or paraphrase another's words; it is a careless or intentional effort to take credit for someone else's work.

Research Areas

  • Algorithms
  • Artificial Intelligence
  • Bio-Computation
  • Database & Information Systems
  • Distributed Systems/Ubiquitous Computing
  • Geometric Computation
  • Graphics
  • Hardware/Architecture
  • Human Computer Interaction
  • Internet Systems & Infrastructure
  • Knowledge Representation & Reasoning
  • Machine Learning
  • Math Theory of Computation
  • Natural Language & Speech
  • Networks
  • Probabilistic Methods & Game Theoretic Methods
  • Programming Languages & Compilers
  • Robotics, Vision & Physical Modeling
  • Scientific Computing
  • Security and Privacy
  • Software/Operating Systems
  • Systems Reliability/Dependability
Possible Research Topics and Areas

The following is a set of hot topics in the field of theoretical and applied information technology on which active research is being conducted by institutes and research organizations across the globe. Since they are in the public domain, everyone is free to take advantage of them, but be careful that your research aims and titles do not collide with others' in the near future; it is therefore recommended that you choose a topic after careful consideration and modify your research aims accordingly.
  • Security and Cryptography on WWW
  • Managing & analyzing large volumes of dynamic & diverse data
  • Privacy and Databases
  • A System for Integrated Management of Data, Accuracy, and Lineage
  • Agile Engineering Methods for Distributed Dependable Systems
  • Modeling Complex Systems
  • Design Patterns for Distributed Dependable Control Systems
  • Agent Oriented Software Engineering
  • Design and Analysis Methods for Multi-Agent Systems
  • Software Engineering Methods and Tools for Soft Computing
  • E-commerce challenges and solutions
  • Automated E-commerce negotiation agents
  • Database management system for XML
  • Tradeoffs in Replication Precision and Performance
  • Trusted Image Dissemination
  • Integrating database queries & Web searches
  • Compiling High-level Access Interfaces for Multi-site Software
  • Content-based Image Retrieval
  • Digital Library Technologies
  • Parallel Query Optimization
  • Large-scale Interoperation and Composition
  • Scalable Knowledge Composition
  • Privacy and Databases
  • High Performance Knowledge Bases
  • Computational Game Theory
  • Multi-Agent Learning
  • Digital Circuit Optimization
  • Transactional Coherence and Consistency
  • Visualizing Large VLSI Datasets
  • Global Optimization and Self-Calibration of CMOS Analog Circuits
  • Computational Law
  • General Game Playing
  • Logical Spreadsheets
  • Collaborative Commerce
  • Global Trading Catalog
  • Exploration of indigenous language dictionaries
  • Textual Inferences
  • Shallow Semantic Parsing
  • Unsupervised Language Learning
  • Question Answering with Statistics and Inference
  • Clustering Models
  • Statistical Machine Translation
  • Design of Ad-hoc Wireless Networks for Real-Time Media
  • Compression and Streaming
  • Optimized Video Streaming
  • Image and Signal Processor
  • Scalable Network Fabrics
  • High Speed Signaling
  • System-Level Design Tools and Hardware/Software Co-design
  • Web Password Hashing
  • Preventing online identity theft and phishing
  • Software Quality and Infrastructure Protection for Diffuse Computing
  • Agile Management of Dynamic Collaboration
  • Computational modeling of signal transduction pathways
  • Robotics
  • Artificial Intelligence
  • Validity Check
  • Electronic Voting
  • Verification of high-level designs
  • Statistics and Data Mining
  • Computer Ethics
  • Privacy, Right of Freedom Of Information
  • Standardizing E-Commerce Protocols
  • Software Metrics and Models
  • Software Configuration Management Patterns
  • Approximation Algorithms
  • Design of Network Topology
  • Software Development Technologies For Reactive, Real-Time, and Hybrid Systems
  • Modeling Flexible Protein Loops
  • Study of Protein Motion
  • Sensing of Deformable Objects
  • Adaptive Dynamic Collision Checking
  • Climbing Robots
  • Deformable Object Simulation
  • Robots on Rough Terrain
  • Textual Inferences
  • Machine Learning Control
  • Enterprise Software, Solutions, and Services
  • E-commerce and the World Wide Web
  • Future of Web Services
  • Electronic surveillance
  • Software Model for Game Programming
  • Extreme Programming
  • Agile Software Development
  • Reliable Component-Based Software Systems
  • Engineering and Technology Management
  • Application of Virtual Reality
  • Digital Convergence
  • Applications of Data warehousing and data mining
  • IP Telephony
  • Genetic Engineering
  • Security threats through spyware
  • Software Architecture Patterns
  • Object Oriented Design Patterns and Frameworks
  • Grid Computing
  • FPGA
  • Voice Technology
  • Controlling Pornography and Computer Crime over Internet
  • Internet and the Economic Revolution
  • Ad-Hoc Networks Modeling
  • Globalization and Computers
  • Computer Aided Design
  • Bioinformatics and Biometrics
  • Computer Technology and Government
  • Computer Crimes Cyberspace Social Aspects
  • Human Computer Interaction
  • Robust IP security
  • Mechanisms for Friendly Robotics
  • Multi-directional Motion Planning
  • Manipulation Planning
  • Surgical Simulation
  • Next-Generation Grids and Distributed Systems
  • Peer to Peer Computing
  • Distributed Data Management
  • Design and Manufacturing
  • Repositories of Reusable Knowledge
  • Randomized Motion Planning
  • Technology-Assisted Surgery
  • Human Motion Simulation
  • Human-Centered Machine Design
  • Simulation & Active Interfaces
  • Graphics System and Architecture
  • Interactive Workspaces
  • Computational Photography
  • Multi-graphics
  • Real-Time Programmable Shading
  • Rendering Algorithms
  • Simulation & Analysis of Muscle Models
  • Virtual Human Simulation
  • Compression of synthetic images
  • Creating digital archives of 3D artworks
  • 3D fax machine
  • Responsive Workbench
  • Spreadsheets for Images
  • Texture Analysis and Synthesis
  • Visualizing Complex Systems
  • Volume Rendering
  • Predicate Abstraction
  • Verification of transaction-based protocols
  • Reconfigurable Wireless Networks for Multimode Communications
  • Smart Photonic Networks
  • Improving Program Robustness via Static Analysis and Dynamic Instrumentation
  • High-Level Area and Timing Estimation
  • Hardware-Software Co-Synthesis
  • Collaborative Co-Located Information Organization
  • Enabling Rapid Off-the-Desktop Prototyping
  • Notebooks that Share and Walls that Remember
  • Interactive Workspaces
  • HAL
  • Logic Programming Techniques
  • Data Compression and Coding
  • Human Language Technology
  • Information Discovery
  • Machine Learning and Data Mining
  • Security and Cryptography
  • Spatial Data
  • XML and Semi-Structured Data
  • Supporting co-located, collaborative work with computationally-enhanced tables.
  • A collaborative work environment
  • Beyond the Desktop
  • Interaction with Large Displays
  • Moving Information and Control
  • Defense against Distributed Denial of Service Attacks
  • Extreme Scale Cluster Architecture
  • Feedback Based Inter-domain Routing
  • History-based Anti-spam
  • Towards Self-Managed Wireless LANs
  • Interactive Workspaces
  • Recovery Oriented Computing
  • A collaborative work environment
  • Only Software & Recursive Micro-reboots
  • Decoupled Storage
  • Space Systems
  • Inference Web
  • Web Semantics Technologies
  • AI-bots
  • Repositories of Reusable Knowledge
  • An Object-Oriented Modular Reasoning System
  • Model-Based Support of Distributed Collaborative Design
  • Modeling, Analysis and Control of Hybrid Systems
  • Technology for Enhanced Reuse of Design Objects
  • Virtual Network System
  • Active Queue Management
  • Scalable Performance Prediction and Efficient Network Simulation
  • Sensor Networks
  • TCP Performance
  • Energy Efficient Wireless Communication
  • Load Balancing
  • Multimedia over Networks
  • Stochastic Network Theory
  • Web Cache Performance and Analysis
  • Optical Router
  • Optimal Routing in the Internet
  • Parallel Packet Switch
  • Rate Control Protocol for Short-lived Flows
  • Single Buffered Routers
  • TCP Switching
  • High Performance Switching
  • Link Adaptation in Wireless Local Area Networks
  • Mobility in Cellular and Wireless Local Area Networks
  • Performance Assessment and Traffic Differentiation in Wireless Local Area Networks
  • Identity Based Encryption
  • Authenticating Streamed Data
  • Identity Based Encryption Email system
  • Intrusion tolerance via threshold cryptography
  • Security of cryptographic primitives and protocols
  • Remote Exploration and Experimentation
  • Reliability Obtained by Adaptive Reconfiguration
  • Agent Applications and Ontologies
  • Agent-Oriented Software Engineering
  • Agent Programming and Specification Languages
  • Concept-Based Retrieval and Interpretation for Large Datasets
  • Complex and Adaptive Systems
  • Constraint Programming
  • Declarative Debugging

Tuesday, 10 January 2012

Conquering the Cloud: 6 Pitfalls Preventing Scalability and How to Avoid Them

We’re all using the same servers. We all have access to the same software. The resources available to each of us are plentiful. Yet, some of us are winning and some of us are losing. Why?
Scalability.
Success in today’s Cloud marketplace requires you to become a highly scalable organization with laser-like focus on efficiency and reproducibility.
Join us as we explore how you can avoid 6 common pitfalls to scalability. We’ll discuss how scaling your team, your infrastructure, and your revenue lets you take advantage of the opportunity to expand successfully with the Cloud.

Cloud Hosting: We recommend Scibero Hosting

ScienceLogic CTO Antonio Piraino Predicts Web Host Industry Trends for 2012

ScienceLogic CTO Antonio Piraino, in a 2010 interview with WHIR TV
(WEB HOST INDUSTRY REVIEW) — With a new year comes the desire to make change, and when it comes to web hosting, former VP of research at Tier1 Research and CTO of monitoring firm ScienceLogic Antonio Piraino says that hosting providers will need to make some serious improvements to keep up with where the industry is headed in 2012.
In an interview with the WHIR, Piraino says there are four key areas on which web hosts will really start to focus in terms of cloud development going forward.
“The things we see most hosts starting to up the ante on are security number one, network and management number two, and orchestration and automation would be three and four,” Piraino says.
Piraino predicts that there will be one or two huge cloud infrastructure breaches in 2012 since most companies tend not to be extremely proactive about their security until they have a problem. At the same time, he encourages hosts to not shy away from talking up their security to customers.
“On the one hand I’d say there is going to be a big attack but on the flip side I keep trying to tell hosts that they need to not be afraid to go and talk up their security because the average company in the United States, or in Europe, or anywhere else in the world has less security than the average host. I think that some of them will start to do it, but that it’s something that all of them should start to talk about,” Piraino says.
Web hosts that provide cloud services will see a greater spend on cloud computing, and the price of cloud computing will go up as hosts start to layer more differentiators and services on top of their infrastructure as a service, according to Piraino.
“There’s going to be higher margins made of cloud computing as well next year. Whatever hosts are not trying to do something in that regard or thinking in that regard are lagging behind for sure,” he says.
Hosts should not be afraid to raise their prices, Piraino says, but they can’t do so without elevating their services.
“A lot of them just don’t know how to elevate their services and I think there is going to be a big separation between the winners and losers next year because businesses are getting a lot more savvy about their options. There’s a lot more noise being made by the top hosting and cloud providers,” he says. “If I was a hosting provider today I would spend a lot more money on marketing than I ever have in the past and I think that’s what we’re going to see in the new year. We’re going to see a lot of marketing campaigns with big budgets from the leaders in the space and new services.”
Piraino says that every web host thinks they can differentiate themselves on support but it’s not enough.
“Unfortunately that’s not a differentiator between each other. It is a differentiator towards Amazon perhaps, but even there you’re going to have to start elevating the kind of technology and services you offer,” he says.
Another trend Piraino foresees in 2012 is more service providers becoming service brokers in that they resell other technologies such as Amazon or third-party reporting systems.
Piraino says web hosts need to prepare for cloud 2.0 or hosting 3.0.
“[Hosting 3.0] is the ability to layer high-margin services on top of that underlying infrastructure and that’s where things like the more automated you are the better your margins are going to be because now you’re not having to hire more and more people who are expensive to do all these manual workflows,” he says.
By tying all the pieces together, Piraino says, web hosts will save on cost, gain a differentiator, and become more attractive to their customers. He says those web hosts who don't implement automated systems are really going to start losing out.

Saturday, 7 January 2012

iPad rumor mill hits high gear, as do Apple's plans for China

The Apple rumor mill went into high gear this week, with reports of multiple iPads, giant televisions, and mystery events later in the month.
One tidbit of official news is that the company is planning to bring the iPhone 4S to China and 21 other countries next week. The move, which was expected, should give the 4S a healthy sales boost considering that China has become a top market for smartphones. In Apple's most recently reported quarterly earnings, Greater China came in second (behind North America) on the company's list of top revenue-generation regions.
You can read more about these stories and others--along with the usual dose of rumors--below.
Apple Talk Weekly rounds up some of the top Apple-related news and rumors. It appears every Saturday morning and is curated by CNET's Apple reporter, Josh Lowensohn.

Iran squeezes Web surfers, prepares censored national intranet

The Iranian Cyber Police published new rules on Wednesday designed to allow officials to know exactly who is visiting what Web sites. Before they can log on, Iranians are required to provide their name, father's name, address, telephone number and national ID, according to an Iranian media report cited by Radio Free Europe. Cafe owners will be required to install security cameras and to keep all data on Web surfers, including browsing history, for six months.
The rules, which come as the country prepares for parliamentary elections in March, are a deterrent to activists who might want to use the Internet cafes to organize protests. Calls to boycott elections distributed via social networks or e-mail will be treated as national security crimes, the Iranian judiciary announced last week, according to a report today in the Wall Street Journal. Government officials claim they need to control access to the Internet to counter what they say is a "soft" cultural war being waged by Western countries to influence the morals of Iranians.
Monitoring Web surfers is an interim measure until the government is done building out its own domestic intranet that is "halal," or pure. Initially, the Iran intranet will run in tandem with the Internet before the global Web is shut off to the 23 million Internet users in Iran, according to reports. Payam Karbasi, spokesman for Iran professional union Corporate Computer Systems, told Iranian media that the domestic network, which was announced last March, would be launched in coming weeks, the WSJ reported.
Iranians have reported that during the intranet tests this week, Internet connections have slowed down and Web sites have been blocked. Access to the VPNs (virtual private networks) Iranians use to reach sites like Facebook, Twitter and YouTube has also been affected, reports said.
Widespread protests over purported fraud in the 2009 election, which brought President Mahmoud Ahmadinejad back to office, prompted the Iranian government to cut off access to opposition Web sites and mobile telephone networks. But protesters flocked to Twitter and Facebook to skirt the communications crackdown, to spread videos and news and to organize demonstrations. Tor and other tools were then used to get around government shutdowns of those sites.
Some of the extreme censorship measures adopted by Iran have also been used in Libya and in China, which deploys the "Great Firewall" to keep objectionable content out of the country. China also requires identification to use Internet cafes in Beijing, and has a history of shutting down blogs as well as allegedly meddling with Gmail and targeting activists with cyber attacks.

Thursday, 5 January 2012

Jobs Requiring Cloud Computing Skills Grow by 61 Percent

Candidate Supply of Sales Managers with Cloud Computing Skills and Experience

(WEB HOST INDUSTRY REVIEW) -- Marketplace talent resource WANTED Analytics announced on Thursday that according to The Hiring Scale, employers and staffing firms have posted more than 10,000 job ads that included requirements for cloud computing skills and experience in the past 90 days.
As part of the WANTED Analytics platform, the Hiring Scale measures conditions in local job markets by comparing hiring demand and labor supply.
According to the study, more than 2,400 companies posted job ads during this 90-day period and hiring demand grew 61 percent year-over-year.
Computer specialists and programmers are most commonly required to have cloud computing experience. But as cloud technology continues to impact other areas of business, additional fields are more commonly required to understand and work with cloud-based applications.
Other jobs that most often include these skills in job ads include marketing managers, sales managers, customer service representatives, and cargo and freight agents.
The study showed that the metropolitan area with the highest volume of job ads for cloud computing skills during the past 90 days was San Francisco, where recruiters placed more than 1,000 unique job listings, representing 95 percent year-over-year growth.
Other cities with high demand included Seattle, Washington, DC, New York, and San Jose, which was the only location to see a year-over-year decline in the volume of online job postings.
Recruiters in the San Jose area posted 12 percent fewer job ads than in the same 90-day period last year.
Recruiting conditions for cloud computing skills are likely to be moderately difficult with conditions varying based on the talent supply and hiring demand in each location.
According to the Hiring Scale, recruiters sourcing for openings in Washington, DC, are likely to experience some of the most difficult recruiting conditions in the United States.
Recruiters in the Washington, DC metropolitan area are likely to see a longer time-to-fill, since job ads there stay posted online longer than the national average of 44 days.

Cloud Security Firm Gazzang Joins AWS Service Provider Program

An illustration on Gazzang's website breaks down its prices for 2012

(WEB HOST INDUSTRY REVIEW) -- Cloud security provider Gazzang has joined the Amazon Web Services service provider partner program, according to an announcement made by the company on Thursday.
This announcement comes less than a month after Gazzang named Dustin Kirkland its chief architect.
Gazzang says this will provide Amazon customers with the ability to improve security through an encryption platform for data protection, access control and key management.
In 2011, Gazzang partnered with web hosting providers. It also added support for the CloudLinux platform to its ezNcrypt data security solution. Gazzang improves the stability of shared hosting and multi-tenant environments.
The AWS solution provider program will help Gazzang broaden its suite of offerings and expand its customer base while driving new revenue streams, according to the press release.
"Gazzang products have been built in the cloud for customers who require extreme scalability and rapid deployment of cloud services," Larry Warnock, CEO at Gazzang said in a statement. "Our on-demand, high-performance cloud security model puts the power of encryption, access control and key management services within reach for thousands of Amazon customers. We are excited to be working together with the AWS team and look forward to bringing a rich portfolio of new cloud data security services to market."
Gazzang says its ezNcrypt Flex Platform helps customers protect, encrypt and provide key management for open source databases like MySQL, PostgreSQL, MongoDB and Cassandra, and web servers like Apache, Nginx and Tomcat. ezNcrypt for Databases is a security application for LinuxOS that includes preconfigured rules, while ezNcrypt Flex allows custom rules.
The ezNcrypt platform installs in minutes and allows customers to create, control and administer their own keys - a feature usually exclusive to more expensive databases, according to the press release.

Samsung Galaxy Ace Plus announced

Samsung has officially unveiled a new Galaxy series phone dubbed the Samsung Galaxy Ace Plus, an enhanced version of the Galaxy Ace smartphone. Featuring a 1 GHz processor with 512 MB of RAM, this phone runs the Android 2.3 OS.
Samsung Galaxy Ace Plus
Samsung Galaxy Ace Plus specifications include a 3.65-inch HVGA display, 5 MP autofocus camera with LED flash, 3 GB of internal storage, up to 32 GB of expandable memory, HSDPA 7.2 Mbps, Wi-Fi, Bluetooth, USB and a 3.5 mm audio jack.
Other features of the Samsung Galaxy Ace Plus are Social Hub, Music Hub, Samsung TouchWiz UI, Samsung ChatOn mobile communication service, and more.
Samsung Galaxy Ace Plus will be available in Russia starting this month, followed by Europe, CIS, Latin America, Southeast and Southwest Asia, the Middle East, Africa and China for 299 Euros ($388).

LG Optimus 2 is the latest addition to the LG Optimus series line-up.

LG Optimus 2 is the latest addition to the LG Optimus series line-up. Powered by the Android 2.3 OS, the Optimus 2 is now available in the US from the operators C Spire and Cellcom.
LG Optimus 2
LG Optimus 2 comes with a 3.2-inch capacitive touchscreen with a resolution of 320 x 480 pixels, a virtual Swype keypad, an 800 MHz processor, a 3.2-megapixel autofocus camera and camcorder, 179 MB of internal memory, and up to 32 GB of expandable memory.
LG Optimus 2 also features an HSDPA network, Wi-Fi, Bluetooth 3.0, GPS and a 1500 mAh battery. This Android smartphone also sports 5 customizable home screens and app categories to easily arrange and customize your favorites.
LG Optimus 2 is now available from C Spire for free with a two-year contract and for $209.99 without a contract. Cellcom is also selling the LG Optimus 2 for $0.95 with a new two-year contract.
Related posts:
  1. LG Optimus 7Q (LG C900) Windows Phone 7 smartphone launched, LG Optimus 7Q Price and Availability
  2. LG Optimus Pro touch and type phone launched
  3. LG Optimus One launches on Three UK, LG Optimus One Price
  4. LG Optimus Me P350 announced, LG Optimus Me P350 Specs & Price
  5. LG Optimus 3D officially launched in Europe

Mobile Phone Brain Cancer Link Rejected

Further research has been published suggesting there is no link between mobile phones and brain cancer.
The risk mobiles present has been much debated over the past 20 years as use of the phones has soared.
The latest study led by the Institute of Cancer Epidemiology in Denmark looked at more than 350,000 people with mobile phones over an 18-year period.
Researchers concluded users were at no greater risk than anyone else of developing brain cancer.
The findings, published on the British Medical Journal website, come after a series of studies have come to similar conclusions.

Consume Less Energy With Blade Servers


October 26th, 2011
In this extremely competitive market, well-informed business owners know that reducing power consumption is a smart and environmentally friendly way to cut operating expenses without reducing product quality or employee output. Switching IT processing to a Dell Blade system can save money while increasing productivity.

Improved Design

Traditional rack servers bundle components into individual cabinets along with separate energy consuming devices like graphics cards and keyboards. Each server requires a power source and extensive cabling and patching. A single optimized blade system can replace an entire room of rack servers and operate from a solitary source of power. The innovative design improves the effectiveness of internal fans and cooling systems, reducing current draw by as much as 65 percent for a fully loaded blade chassis over similarly configured rack systems. Since a single cabinet houses an entire block of servers, the need for external cooling systems or “cold rooms” drops as well.

The Text Message/SMS Turns 19 Years Old


December 3rd, 2011
First SMS text message
The first SMS text message was sent over the Vodafone GSM network in the United Kingdom on 3 December 1992, by a man named Neil Papworth, using a personal computer, to Richard Jarvis of Vodafone, who received it on an Orbitel 901 handset.
The text of the message was “Merry Christmas”.
The technology behind the SMS text is 27 years old, having first been developed in the Franco-German GSM cooperation in 1984 by Friedhelm Hillebrand and Bernard Ghillebaert. It was eight years later that the “Merry Christmas” text was sent.

The difference between shared and dedicated IP addresses


Each computer connected to the Internet is assigned a unique IP address for the purposes of communication. An IP address is a 32-bit numeric address, usually expressed as four numbers from 0-255 separated by dots, for example 192.168.0.123. Billions of addresses are possible; however, the number is finite.
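As a small illustration of the dotted-quad notation just described, the following Python sketch (standard library only) converts the example address to its single 32-bit value and back, and prints the total size of the address space; it is purely illustrative.

    import socket
    import struct

    # Pack the dotted-quad form into one 32-bit integer and convert it back.
    dotted = "192.168.0.123"
    as_int = struct.unpack("!I", socket.inet_aton(dotted))[0]
    print(as_int)                                        # 3232235643
    print(socket.inet_ntoa(struct.pack("!I", as_int)))   # 192.168.0.123

    # The address space is finite: 2**32, roughly 4.3 billion addresses.
    print(2 ** 32)                                       # 4294967296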
In the Web hosting industry there are two types of IP address...
  • Dedicated IP address (also called static IP) means that a website has its own IP address. Whether you type in your URL or the numeric form of its IP address, both will bring you to the same domain.
  • Shared IP address means that multiple websites share the same address. Web servers can determine by the domain entered in a user's browser which website is being requested. Typing in the IP address will bring you to some kind of generic page instead of the specific site you want.
For Dedicated IP at cheap cost we prefer http://scibero.com

Due to the rapid increase in the number of registered domain names and the finite number of IP addresses, Web hosting providers are forced to use shared IPs when possible. In fact, hundreds of websites often share the same address. Static IP hosting is no longer the norm and usually costs more.

Who needs a dedicated IP address?

Generally, having a website on a shared IP address will not cause you any harm. However, there are a few cases when a static IP is required...
  • Having your own Private SSL Certificate. Secure e-commerce websites need SSL certificates for accepting credit cards online. Web hosts usually offer a shared SSL certificate where clients can share the Web host's SSL. If you are using your Web hosting provider's shared SSL you don't need a static IP.
  • Anonymous FTP. This means that anyone using FTP software can access files in a special directory of your site. It's called anonymous FTP because the user name used for access is "anonymous." Many Web hosting providers require a static IP for the anonymous FTP function to work properly.
  • You want to access your website by FTP or Web browser even when the domain name is inaccessible, such as domain name propagation periods.

Dedicated IP hosting and search engines

There has been debate in the SEO industry for a while regarding whether using a dedicated IP address is better than having a shared IP for your website...
  • Some SEOs suppose that there really is no good reason to obtain static IP Web hosting. Your site will not perform any better by having its own static IP.
  • Some others theorize that your choice of dedicated IP hosting vs. shared hosting might slightly affect your rankings (i.e. it's a factor considered by search engines).
  • Yet others suppose that sharing an IP address with known spam or adult sites raises a warning flag with search engines, so some of them may respond by banning the entire IP address from their index.
Most probably, these fears are greatly exaggerated. Since the majority of sites on the Web are hosted via shared IPs, it would be counterproductive for search engines to penalize a site based on its IP. Search engines are able to ban a domain name instead of an entire IP neighborhood, so using shared IP hosting is search-engine safe. Moreover, almost all hosting will eventually be shared in order to preserve IP addresses.

HURD Multi-server Operating System

The GNU Hurd is under active development. Because of that, there is no stable version. We distribute the Hurd sources only through Git at present.
Although it is possible to bootstrap the GNU/Hurd system from the sources by cross-compiling and installing the system software and the basic applications, this is a difficult process. It is not recommended that you do this. Instead, you should get a binary distribution of the GNU/Hurd, which comes with all the GNU software precompiled and an installation routine which is easy to use.
The Debian project has committed to providing such a binary distribution. Debian GNU/Hurd is currently under development and available in the unstable branch of the Debian archive.

Wednesday, 4 January 2012

Dust: A Blocking-Resistant Internet Transport Protocol


Brandon Wiley
School of Information, University of Texas at Austin

Abstract. Censorship of information on the Internet has been an increasing
problem as the methods have become more sophisticated and increasing
resources have been allocated to censor more content. A number of approaches
to counteract Internet censorship have been implemented, from censorship-resistant
publishing systems to anonymizing proxies. A prerequisite for these
systems to function against real attackers is that they also offer blocking
resistance. Dust is proposed as a blocking-resistant Internet protocol designed
to be used alone or in conjunction with existing systems to resist a number of
attacks currently in active use to censor Internet communication. Unlike
previous work in censorship resistance, it does not seek to provide anonymity in
terms of unlinkability of sender and receiver. Instead it provides blocking
resistance against the most common packet filtering techniques currently in use
to impose Internet censorship.
Keywords: censorship resistance, blocking resistance

1 Introduction
Censorship of information on the Internet has been implemented using increasingly
sophisticated techniques. Shallow packet filtering, which can be circumvented by
anonymizing proxies, has been replaced by deep packet inspection technology which
can filter out specific Internet protocols. This has resulted in censorship-resistant
services being entirely blocked or partially blocked through bandwidth throttling.
Traditional approaches to censorship resistance are not effective unless they also
incorporate blocking resistance so that users can communicate with the censorship
circumvention services.
Dust is an Internet protocol designed to resist a number of attacks currently in
active use to censor Internet communication. Dust uses a novel technique for
establishing a secure, blocking-resistant channel for communication over a filtered
channel. Once a channel has been established, Dust packets are indistinguishable from
random packets and so cannot be filtered by normal techniques. Unlike other
encrypted protocols such as SSL/TLS, there is no plaintext handshake which would
allow the protocol to be fingerprinted and therefore blocked or throttled. This solves a
principal weakness of current censorship-resistant systems, which are vulnerable to
deep packet inspection filtering attacks.
1.1 Problem
Traditionally, Internet traffic has been filtered using “shallow packet inspection”
(SPI). With SPI, only packet headers are examined. Since packet headers must be
examined anyway in order to route the packets, this form of filtering has minimal
impact on the scalability of the filtering process, allowing for its widespread use. The
primary means of determining “bad” packets with SPI is to compare the source and
destination IP addresses and ports to IP and port blacklists. The blacklists must be
updated as new target IPs and ports are discovered. Circumvention technology, such as
anonymous proxies, bypasses this filtering by providing new IPs and ports not in the
blacklist which proxy connections to blacklisted IPs. As the IPs of proxies are
discovered, they are added to the blacklist, so a fresh set of proxy IPs must be made
available and communicated to users periodically. As port blacklists are used to block
certain protocols, such as BitTorrent, regardless of IP, clients use port randomization
to find ports which are not on the blacklist.
Recently, “deep packet inspection” (DPI) techniques have been deployed which
can successfully block or throttle most censorship circumvention solutions [14]. DPI
filters packets by examining the packet payload. DPI can achieve suitable scalability
through random sampling of packets. Another technique in use is to initially send
packets through, but also send them to a background process for analysis. When a bad
packet is found, further packets in that packet stream can be blocked, or the IPs of
participants added to the blacklist. The primary tests that DPI filters apply to packets
are packet length comparison and static string matching, although timing-based
fingerprints are also possible. DPI can not only filter content, but also fingerprint and
filter specific protocols, even encrypted protocols such as SSL/TLS. Encrypted
protocols are vulnerable to fingerprinting based on packet length, timing, and static
string matching of the unencrypted handshake that precedes encrypted
communication. For instance, SSL/TLS uses an unencrypted handshake for cipher
negotiation and key exchange.
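As a rough illustration of the static string matching and packet length tests described above, the toy Python check below flags a packet whose payload begins with the fixed leading bytes of a TLS handshake record; the byte pattern and the length range are illustrative assumptions and do not describe any particular filtering product.

    # Toy DPI-style check: match a known static string at the start of the
    # payload, or a fingerprinted packet-length range. Illustrative only.
    TLS_HANDSHAKE_PREFIX = b"\x16\x03\x01"   # handshake record, TLS 1.0 version bytes

    def flag_packet(payload: bytes) -> bool:
        static_match = payload.startswith(TLS_HANDSHAKE_PREFIX)
        length_match = 200 <= len(payload) <= 400
        return static_match or length_match

    print(flag_packet(b"\x16\x03\x01\x00\xc8" + b"\x00" * 195))  # True
    print(flag_packet(b"\x00" * 50))                             # False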
The goal of Dust is to provide a transport protocol which cannot be filtered with
DPI. To accomplish this goal it must not be vulnerable to fingerprinting using static
string matching, packet length comparison, or timing profiling. Other attacks such as
IP address matching and coercion of operators are outside of the scope and are best
addressed by use of existing systems such as anonymizing proxies and censorship-resistant
publishing systems running on top of a Dust transport layer.
2 Related Work
Censorship resistance is often discussed in connection with other related concepts
such as anonymity, unlinkability, and unobservability. These terms are sometimes
used interchangeably and sometimes assumed to have specific technical definitions.
Pfitzmann proposed a standardized terminology that defines and relates these terms
[13]. Unlinkability is defined as the indistinguishability of two objects within an
anonymity set. Anonymity is defined as unlinkability between a given object and a
known object of interest. Unobservability is defined as unlinkability of a given object
and a randomly chosen object.
Defining properties such as anonymity and unobservability in terms of
unlinkability opens the way for an information theoretical approach. Hevia offers
such an approach by defining levels of anonymity in terms of what information is
leaked from the system to the attacker [8]. Unlinkability requires the least protection,
hiding only the message contents. Unobservability requires that no information is
leaked whatsoever. Of particular interest is that an anonymous system of any type can
be taken up to the next level of anonymity by adding one of two system design
primitives: encryption and cover traffic.
2.1 Censorship-Resistant Publishing and Anonymizing Proxies
One approach to achieving censorship resistance on the Internet is through
censorship-resistant publishing services such as Publius [18], Tangler [19], and
Mnemosyne [5]. An issue with anonymous publishing systems for practical use is that
even a system that provides maximum protection for stored files must still be
accessible in order for those files to be retrieved. If communication to the document
servers is blocked, then the system is not usable. This requires protection for
communications as well as documents. Serjantov [15] proposed the solution of
combining anonymous publishing with anonymous proxies by running the publishing
service as a hidden service behind an onion routing network such as Tor [3].
This solution passes on the problem of blocking from the publishing system to the
anonymizing proxy. However, anonymizing proxies are also vulnerable to blocking
attacks. While a network of proxy nodes can provide protection against destination IP
blacklists, they are still vulnerable to various forms of DPI protocol fingerprinting.
This problem is dealt with by Kopsell, who proposes a method to extend existing
anonymous publishing systems to bypass blocking, a property referred to as "blocking
resistance" [9]. In light of the work of Serjantov and Kopsell it is evident that if
anonymous proxies are a necessary component of censorship-resistant publishing and
blocking resistance is a necessary property of anonymous proxies then blocking
resistance is necessary for censorship-resistant publishing.
Kopsell’s threat model contains the assumptions that the attacker has control of
only part of the Internet (the censored zone), that some small amount of unblockable
inbound information can enter the censored zone (perhaps out of band), and that
volunteers outside of the censored zone are willing to help although they may have
differing amounts of bandwidth to contribute. The attacker is assumed to have vast
resources, to control all links outbound from the censored zone to the Internet, and to
be an expert in blocking-resistant system design.
Kopsell’s solution is divided into two parts: access to the blocking-resistant
system, and distributing information about the blocking-resistant system, such as the
IPs of proxy nodes. The nodes in Kopsell’s system are volunteer-run anonymizing
proxies that clients communicate with over a steganographic protocol in order to
obtain access to a censorship-resistant publishing system. Clients obtain an invitation
to the network, including the IP addresses of some proxy nodes, through a low-bandwidth,
unblockable channel into the censored zone. A number of ideas are
proposed for the steganographic data channel such as SSL and SMTP protocols. For
the unblockable channel email is used.
Though Kopsell’s model for blocking resistance solves the real-world issues facing
anonymous publication systems and proxies, it relies on the steganographic data and
unblockable invitation channels to have certain properties which may not be met in
actual implementation. The essential purpose of the steganographic channel is to
provide resistance to protocol fingerprinting. Even if the information cannot be
recovered from the steganographic encoding, if it is discovered that the channel
contains steganographically encoded information then it can be summarily blocked. In
other words, the encoding must be undetectable in order to be useful. The constraint
on the invitation channel is that it is completely unblockable, as no particular
protection is given to information distributed on this channel.
Real-world analysis of attacks has shown that SSL is not a suitable encoding
against real attackers as the protocol is easily fingerprinted and summarily blocked or
rate limited [14]. Also, email is an unsuitable channel for invitations because it is not
unblockable. Recent attacks have blocked the communication of IP addresses of
proxies through email and instant messaging. Given these attacks, what sorts of
channels are suitable for invitations and data to be communicated without being
vulnerable to blocking?
Information theory provides a conceptual framework that offers an answer not just
to the question of blocking resistance but of its relationship to censorship resistance in
general. Censorship-resistant publishing systems provide document unlinkability.
Hevia links the definition of unlinkability to information theory through
indistinguishability of information transmitted on the channels between the system
and the attacker [7] and Boesgaard links document unlinkability to information
theoretic perfect secrecy [2]. Censorship resistance is therefore a form of perfect
secrecy by means of indistinguishability. Pfitzmann defines unobservability as a form
of unlinkability [13] and Perng defines censorship resistance as unobservability [12].
In other words, censorship resistance is unobservability through unlinkability of the
object of interest and a random object, which is equivalent in information theory to
perfect secrecy. Viewed in this context, a censorship-resistant publishing system
would be one in which through observation of the system the attacker cannot obtain
sufficient information to distinguish which documents are accessed by users, in other
words document unobservability. Anonymous proxies add a similar property,
unobservability of the publishing system. The final step, which Kopsell calls blocking
resistance, is unobservability of the anonymous proxy, which requires unobservability
of the protocol by which clients communicate with the proxies. When these properties
are combined, end-to-end unobservability is created from the client to the document.
The ideal communication protocol is therefore one which is unobservable, meaning
that a packet or sequence of packets is indistinguishable from a random packet or
random sequence of packets. This is not necessarily a steganographic encoding. A
steganographic encoding is unobservable only so long as the message encoding is not
detectable, regardless of whether the message can actually be decoded. Additionally,
steganographic channels can be blocked if the cover channel is blocked. In the case
of the rate limiting of Tor, SSL was being used as both encryption and steganography.
Rate limiting of the cover occurred because all SSL traffic was summarily rate
limited, causing a rate limiting of the embedded message as well and essentially
failing to provide blocking resistance.
Steganography is not the only option for unobservable protocols. Encryption is an
equally valid means of making messages indistinguishable. Although protocols such
as SSL are encrypted, these protocols often have an unencrypted handshake. This
unencrypted portion of the communication is what is used to fingerprint and block the
protocol. Additionally these protocols may leak other information to the attacker
through packet lengths and timing. However, an encrypted protocol without a
handshake would be resistant to handshake fingerprinting. With sufficiently secure
encryption and a lack of unencrypted handshakes, one encrypted protocol should be
indistinguishable from another encrypted protocol.
In the normal use case for SSL, an entirely encrypted connection would not be
possible as the communicating peers need to perform a public key exchange in order
to determine the session key used to encrypt the conversation. However, unlike a
normal SSL connection, Kopsell’s model allows for a single out-of-band invitation to
be sent prior to the establishment of the data connection.
2.2 Obfuscated Protocols
Several obfuscated protocols have been developed with various goals, including
blocking resistance. For instance, BitTorrent clients have implemented three
encryption protocols in order to prevent filtering and throttling of the BitTorrent
protocol, the strongest of which is Message Stream Encryption (MSE). [11] Analysis
of packet sizes and the direction of packet flow has been shown to identify
connections obfuscated with MSE with 96% accuracy. [7] MSE also uses a cleartext
DH key exchange. However, it does not include static strings in the protocol
handshake as the handshake consists solely of the DH parameters, which are unique
to each connection.
Other obfuscated protocols which are not designed explicitly for blocking
resistance also suffer from cleartext handshakes and often include static strings in the
handshake. Obfuscated TCP (ObsTCP) has gone through several versions, each
using a different means to communicate the keys, including TCP options, HTTP
headers, and DNS records. [12] The strongest of these is DNS records as TCP options
and HTTP headers are easily blocked using static string matching, while DNS records
are transmitted on a separate connection from the one carrying the data, requiring
correlation between separate connections. However, Sandvine has already
demonstrated this ability in the blocking of BitTorrent traffic by monitoring tracker
protocol traffic to obtain the ports of BitTorrent protocol connections and then
subsequently interfering with the (possibly encrypted) BitTorrent protocol
connections. [17][6] A second connection from the same IP can therefore not be used
as an out-of-band channel for the purpose of blocking resistance. A newer proposal
similar to ObsTCP called tcpcrypt does not have blocking resistance as a design goal and
consequently does worse than ObsTCP/DNS, as it uses static strings in the handshake
protocol. [1]
An attempt has been made to address the cleartext handshake problem in the form
of the obfuscated-openssh patch to OpenSSH which encrypts the SSH handshake.
[10] An encrypted handshake for an existing encrypted protocol is a good idea as it is
the minimal amount of change necessary to achieve blocking resistance, as long as the
protocol already has resistance to packet size and timing attacks. The obfuscated-openssh
patch essentially implements its own minimal blocking-resistant protocol,
performed before SSH starts and on the same TCP connection. This minimal protocol
is similar to Dust in that it is designed to be resistant to static string and packet size
matching. Unfortunately, it is not truly blocking resistant because it relies on a false
(or perhaps outdated) assumption about the capabilities of filters. The handshake is
encrypted with a key that is generated from a seed that is prepended to the beginning
of the encrypted part of the handshake. The key is generated by iterated hashes of the
seed with the iteration number chosen to be high enough that key generation is slow.
The blocking resistance of this technique relies on key generation not being
sufficiently scalable to do across all connections simultaneously. However, modern
filters are capable of statistically sampling packets and processing them offline to flag
packets and then using those results to block IPs which have sent flagged packets.
[17] This approach is probabilistic in its ability to block connections, but is highly
scalable. Additionally, the introduction of slow key generation may allow for even
less expensive timing attacks in which the only information needed to block a
connection is the timing between the first and second packets.
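A minimal sketch of the iterated-hash key generation just described is given below, assuming SHA-256 and an arbitrary iteration count (the obfuscated-openssh patch uses its own parameters): the point is that the work factor slows each connection but does not stop a filter that samples packets and derives keys offline.

    import hashlib
    import os

    ITERATIONS = 100_000   # chosen large enough that key generation is deliberately slow

    def derive_key(seed: bytes, iterations: int = ITERATIONS) -> bytes:
        # The seed travels in the clear at the start of the handshake, so any
        # observer willing to spend the same work can recover the key offline.
        digest = seed
        for _ in range(iterations):
            digest = hashlib.sha256(digest).digest()
        return digest

    seed = os.urandom(16)
    print(derive_key(seed).hex())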
3 Design
Dust is a protocol designed to provide protocol unobservability in order to implement
Kopsell’s concept of blocking-resistance as a necessary prerequisite to achieve
censorship resistance. The Dust protocol is designed to protect against an attacker that
utilizes Deep Packet Inspection (DPI) to fingerprint protocols for the purpose of
blocking or rate limiting connections. In order to establish protocol unobservability,
all packets consist entirely of encrypted or random single-use bytes so as to be
indistinguishable from each other and random packets.
In order to perform a key exchange without an unencrypted handshake, a novel
out-of-band half-handshake technique is used. As in Kopsell’s model, a peer must
first receive an out-of-band invitation to join the network. This invitation contains the
IP address and public key of the receiver. The sender can then complete the
handshake by sending a single in-band intro packet followed by any number of data
packets encrypted with the session key that was computed in the handshake. The
minimal Dust conversation therefore consists of two in-band packets: one intro
packet and one data packet. The protocol allows these packets to be chained together
to fit inside a single UDP or TCP packet. The use of a single UDP or TCP packet for
communication prevents timing attacks when the payload is sufficiently small.
3.1 Protocol
In order to accept a connection from an unknown host, a Dust server must first
complete a key exchange with the client. The Dust server first creates an id and secret
pair. The server then sends an out-of-band invite packet to the client, which contains
the server's IP, port, public key, the id, and the secret. The invite is encrypted with a
password and so is indistinguishable from random bytes. It can then be safely
transmitted, along with the password, over an out-of-band channel such as email of
instant messaging. It will not be susceptible to the attacks which block email
communication containing IP addresses because only the password is transmitted
unencrypted. If the invitation channel is under observation by the attacker, and only in
the case that the attacker is specifically attempting to filter Dust packets, then the
password should be sent by another channel that, while it can still be observed by the
attacker, should be uncorrelated with the invitation channel.
In order to complete the handshake, the client uses the IP and port information
from the invite packet to send an intro packet to the server. The intro packet is
prepended with the random, single-use id from the invite packet. The packet is
encrypted with the secret from the invite and contains the public key of the client.
When the server receives a packet from an unknown IP address, it assumes it to be
an intro packet and retrieves the id from the beginning of the packet. This is used to
look up the associated stored secret. The server uses the secret to decrypt the packet,
retrieves the public key of the client, and generates a shared session key. It adds the
session key to its list of known hosts, associated with the IP and port from which the
intro packet was sent. This completes the second phase of the public key exchange.
The client and server can now send and receive encrypted data packets freely. Since
Dust packets can be chained inside TCP or UDP packets, the intro packet may be
followed immediately by a data packet, which may constitute the entirety of the
conversation.
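The message flow described above can be summarized in the sketch below. The "key pairs", the toy stream cipher, and the session-key derivation are explicit placeholders for the public-key and symmetric primitives the protocol assumes, so this shows only the shape of the out-of-band half-handshake, not a reference implementation.

    import hashlib
    import os

    def toy_stream(key: bytes, data: bytes) -> bytes:
        # Stand-in XOR keystream (SHA-256 in counter mode) so the sketch runs;
        # a real Dust implementation would use a proper cipher.
        stream, counter = b"", 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, stream))

    # Server side: create an id/secret pair and an out-of-band invite.
    server_public = os.urandom(32)                       # placeholder "public key"
    invite_id, invite_secret = os.urandom(16), os.urandom(32)
    pending_invites = {invite_id: invite_secret}
    invite = {"ip": "203.0.113.1", "port": 7000, "server_public": server_public,
              "id": invite_id, "secret": invite_secret}
    # (In Dust the invite itself is encrypted with a password and sent out of band.)

    # Client side: complete the handshake with a single in-band intro packet
    # carrying the client's public key, encrypted under the invite secret.
    client_public = os.urandom(32)                       # placeholder "public key"
    intro_packet = invite["id"] + toy_stream(invite["secret"], client_public)

    # Server side: look up the single-use id, recover the client's public key,
    # and derive a session key (placeholder for the real key agreement).
    rcvd_id, rcvd_body = intro_packet[:16], intro_packet[16:]
    secret = pending_invites.pop(rcvd_id)                # id and secret are single-use
    rcvd_client_public = toy_stream(secret, rcvd_body)
    session_key = hashlib.sha256(rcvd_client_public + server_public).digest()

    # Data packets can now be encrypted under the shared session key.
    data_packet = toy_stream(session_key, b"hello over Dust")
    print(toy_stream(session_key, data_packet))          # b'hello over Dust'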
3.2 Packet Format
There are three types of Dust packets: invite, intro, and data packets. All three types
of packets build upon the basic Dust packet format as shown in Fig. 1.
Fig. 1. The general Dust packet format. This is also the format for data packets.
In a Dust packet, the MAC is computed using the ciphertext, IV, and a key which
differs depending on the type of packet. Using a MAC allows for the contents of the
packet to be verified and corruption or tampering to be detected. The IV, or
initialization vector, is a single-use random value used to encrypt the ciphertext and
compute the MAC. This ensures that the ciphertext and MAC values will be different
even when sending the same data. Since the IV is random and the MAC is computed
using the IV, both values are effectively random to an observer. The rest of the
packet, excluding the padding, is encrypted into the ciphertext. The ciphertext
includes a timestamp to protect against replay attacks, lengths for the data and
padding, and the data itself. A separate padding length (PL) value is needed because
several Dust packets may be contained inside a single UDP or TCP packet. Finally, a
random number of random bytes of padding are added to randomize the packet
length.
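A sketch of assembling a general Dust packet along the lines of Fig. 1 is given below, using HMAC-SHA256 for the MAC and a stand-in keystream cipher; the field widths, the cipher, and the padding bound are assumptions made only to keep the example concrete and runnable.

    import hashlib
    import hmac
    import os
    import struct
    import time

    def toy_encrypt(key: bytes, iv: bytes, body: bytes) -> bytes:
        # Stand-in keystream cipher (SHA-256 counter mode) for illustration.
        stream, counter = b"", 0
        while len(stream) < len(body):
            stream += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(body, stream))

    def build_dust_packet(key: bytes, data: bytes) -> bytes:
        # The ciphertext covers the timestamp, data length, padding length and
        # the data; the random-length padding stays outside the ciphertext so
        # several packets chained in one UDP/TCP packet can still be delimited.
        padding = os.urandom(os.urandom(1)[0])             # 0-255 random bytes
        body = struct.pack("!IHH", int(time.time()), len(data), len(padding)) + data
        iv = os.urandom(16)                                 # single-use random IV
        ciphertext = toy_encrypt(key, iv, body)
        mac = hmac.new(key, iv + ciphertext, hashlib.sha256).digest()
        return mac + iv + ciphertext + padding

    packet = build_dust_packet(os.urandom(32), b"payload bytes")
    print(len(packet))    # varies from run to run because of the random padding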
Fig. 2. The format of an invite packet.
An invite packet has the format shown in Fig. 2. An invite packet, being a Dust
packet, contains all of the same fields as a data packet, such as MAC, IV, and
padding. The key used in an invite packet to encrypt the ciphertext and compute the
MAC is derived with a PBKDF from a password and a random salt value. The salt value
is prepended to the packet. The use of both salt and a PBKDF makes it difficult to
decrypt the packet by brute force. This protects the contents of the invite packet
against decryption unless the password is known.
The invite packet includes the information necessary for the client to connect to the
server and complete the handshake. It contains the server’s public key, the IP and port
where the peer can be contacted, a flags byte which specifies if the peer accepts UDP
or TCP connections and whether the IP is an IPv4 or IPv6 address, and an id and
secret pair to be used in the completion of the handshake.
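The password-based key protecting the invite packet can be sketched with the standard-library PBKDF2; the choice of PBKDF2-HMAC-SHA256, the salt length, and the iteration count are illustrative assumptions, since the text does not fix them.

    import hashlib
    import os

    password = b"invite-password"      # shared out of band, possibly on a separate channel
    salt = os.urandom(16)              # prepended in the clear to the invite packet
    invite_key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

    # The random salt plus the deliberately slow KDF make offline brute-force
    # guessing of the password expensive, protecting the invite's contents.
    print(salt.hex(), invite_key.hex())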
Fig. 3. The format of an intro packet.
An intro packet has the format shown in Fig. 3. The id used in the intro packet is
the same as the one used in the invite packet. This is effectively a single-use random
value because, when it was contained in the invite packet, it was encrypted; it is only seen
in plaintext in the intro packet. The id is used by the server to link the intro packet to
the stored single-use random secret. This secret is used to encrypt the ciphertext and
to compute the MAC for the intro packet. Since each id is a single-use value, only one
intro packet can be sent for each invite packet received by the client. The rest of the
fields in an intro packet are the same as a general Dust packet. The content of an intro
packet is the public key of the client.
Once the server has obtained the client’s public key from the intro packet, the key
exchange is complete and a shared session key is computed by both sides for use in
encrypting the data packets. The data packets are simply general Dust packets as
shown in Fig. 1 with no extra fields. In a data packet, the content is the data to be sent
and the key used to encrypt the ciphertext and to compute the MAC is the shared
session key derived from the exchanged public keys and locally stored private keys.
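The session-key derivation is symmetric: each side combines its own private key with the other side's public key. The sketch below only illustrates that symmetry; the actual public-key operation used by Dust is abstracted behind a caller-supplied function, since the curve and key-derivation details are not restated here:

    import hashlib

    def session_key(my_private_key, peer_public_key, shared_secret_fn):
        # Both peers compute the same shared secret from opposite key pairs,
        # then hash it down to the key used to encrypt and MAC data packets.
        # shared_secret_fn stands in for the public-key operation (e.g. an
        # elliptic-curve Diffie-Hellman), which is an assumption here.
        shared = shared_secret_fn(my_private_key, peer_public_key)
        return hashlib.sha256(shared).digest()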
3. Discussion
The Dust protocol provides protocol unobservability by providing protection against
the major methods of protocol fingerprinting through DPI. By encrypting or
randomizing all bytes in all packets, static string matching is defeated. By
randomizing packet length, length matching is defeated. By allowing a full
conversation to be transmitted in a single UDP or TCP packet, timing attacks are
defeated in the case of sufficiently small messages. Additionally, protection is
provided against a number of specific attacks on the protocol. Packet corruption is
defeated by use of a MAC. Replay attacks are defeated within a certain time window
by use of a timestamp. Brute force decryption of invite packets is defeated by use of
salt and a PBKDF. Additionally, any fields that are not encrypted are always
randomized and single-use so that the attacker cannot gain additional information
about the protocol even through long-term protocol observation.
Dust is designed to protect against current attacks, which are based on matching
fingerprints of protocols against blacklists of known protocols. An obvious countermeasure
against the Dust protocol is to switch from blacklist filtering to whitelist filtering.
This is not addressed for two reasons. First, blacklists are the method currently in
widespread use, whereas whitelists are not. Defeating blacklists is a significant step
towards bypassing existing censorship attempts. Second, an approach which can
bypass a whitelist has disadvantages over an approach designed to bypass blacklists.
Steganography must be employed to encode traffic inside of whitelist-compatible
traffic. As has been discussed, attempting this encoding risks introducing additional
information that could be used for fingerprinting, and the cover traffic itself may be
filtered. The Dust approach is simpler and more efficient to implement than a
steganographic approach, and so is preferable when only blacklist filtering is
considered relevant.
4. Limitations
Dust does not attempt to protect against attacks that are already addressed by
anonymizing proxies and censorship-resistant publishing systems. Specifically, no
attempt is made to obscure sender or receiver IP addresses or ports or to protect server
operators from coercion. These attacks would ideally be addressed by a system such
as the one proposed by Kopsell, consisting of an anonymizing proxy network allowing
access to a censorship-resistant publishing system and using the Dust protocol as a
blocking-resistant transport protocol.
In order for timestamps to be effective, Dust requires the client and server clocks to
be reasonably synchronized, such as with NTP, as packets with out-of-date
timestamps will be discarded. This is a possible area of future work for the protocol as
clock synchronization may not always be available. Packet sequence numbers, logical
clocks, and application-level clock synchronization are possible options to be
considered for future revisions, although each comes with its own advantages and
disadvantages.
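A minimal sketch of the freshness check this implies, with an illustrative (not protocol-mandated) tolerance window:

    import time

    MAX_CLOCK_SKEW = 300  # seconds; illustrative window, not a protocol constant

    def timestamp_fresh(packet_timestamp, now=None):
        # Reject packets whose timestamp falls outside the allowed window.
        # This defeats replays outside the window, but requires client and
        # server clocks to agree to within MAX_CLOCK_SKEW (e.g. via NTP).
        if now is None:
            now = time.time()
        return abs(now - packet_timestamp) <= MAX_CLOCK_SKEW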
Dust does not provide retransmission of dropped packets or reordering of out-of-order packets
and provides no mechanism for acknowledgement of received packets. This is left to
higher level protocols built on top of Dust. The reason for this is that Dust focuses on
a minimal design that provides maximum blocking resistance. An ideal message for
use with the current Dust protocol would fit inside a single UDP packet as this does
not reveal any timing information that can be used for fingerprinting. Additional
layers must be careful to not leak timing information to the attacker. This is
considered to be a separate but related problem in unobservable protocol design.
No explicit mechanism for NAT hole punching is provided in the protocol. For
IPv6 use, hole punching should not be necessary. For IPv4 use, Dust is compatible
with and has been tested with Teredo, which provides end-to-end IPv6 connectivity
on top of IPv4, including NAT hole punching even if both peers are behind NAT. In
the case that Teredo has been blocked, TCP can be used instead of UDP as long as
only the client is behind NAT. As implementing hole punching would complicate the
protocol and open the way to timing attacks, the use case of an IPv4 server behind
NAT without Teredo is left unsupported and would have to be implemented by
individual applications when relevant.
5. Future Work
There are a number of enhancements to the Dust protocol that could protect against
additional attacks. An obvious addition is a reliable transmission protocol on top of
the basic Dust protocol that includes packet acknowledgements. This would require
a randomized packet scheduler in order to avoid leaking timing information. Once
implemented, it could protect against packet loss attacks such as dropping the first
packet between any two IPs, which in the case of Dust is the crucial introduction
packet. Once a reliable protocol is available, a secondary key exchange could occur
along with periodic key rotation, allowing for forward secrecy of conversations.
An additional area of research is how to add steganographic encoding to Dust
packets. This would protect against whitelist attacks, but would require careful design
to avoid leaking additional information to the attacker that could be used for
fingerprinting. The problem of the blocking of the cover traffic would also need to be
addressed.
In addition to the extension of the Dust protocol to protect against further attacks,
there is also work to be done in the evaluation of the Dust protocol in real world
scenarios. This is the most immediate next phase of research. Actual Dust traffic will
be evaluated against real world censorship in current use on the Internet and its
performance compared to other protocols used in circumvention technologies. The
distinguishable characteristics of each protocol will be compared to determine their
degree of protocol unobservability in both theoretical and practical terms.
6. Conclusion
Dust fills an important gap in the field of censorship resistance and privacy-enhancing
technologies. By focusing exclusively on blocking resistance it solves real world
attacks on existing censorship-resistant publishing and anonymous proxy systems. An
ideal system combining the Dust protocol for communication, an anonymous proxy
system for routing, and a censorship-resistant publishing system running as a hidden
service would provide end-to-end unobservability and maximum protection against
attackers.
Additionally, the design of the Dust protocol furthers the state of theory in the field
by proposing an information theoretic bridge between censorship-resistant publishing,
anonymous proxies, and blocking-resistant protocols based on the property of
unobservability. A relatively unexplored area of the field is opened by proposing the
centrality of blocking resistance instead of unlinkability in censorship resistance and
the adoption of Kopsell’s attack model, in which the attacker does not have the
power of global eavesdropping.
Those wishing to use the Dust protocol for academic or practical purposes can find
the source code for its implementation at http://github.com/blanu/Dust.
References
1. Bittau, A., Hamburg, M., Handley, M., Mazieres, D., Boneh, D.: The Case for Ubiquitous Transport-Level Encryption. In: 19th USENIX Security Symposium (2010)
2. Boesgaard, C.: Unlinkability and Redundancy of Content in Anonymous Publication Systems. http://www.diku.dk/hjemmesider/ansatte/pink/haven/unlink.pdf (2004)
3. Dingledine, R., Mathewson, N., Syverson, P.: Tor: The Second-Generation Onion Router. In: Proceedings of the 13th USENIX Security Symposium (2004)
4. Dingledine, R.: Tor and Circumvention: Lessons Learned. The 26th Chaos Communication Congress (2009)
5. Hand, S., Roscoe, T.: Mnemosyne: Peer-to-Peer Steganographic Storage. In: Druschel, P., Kaashoek, F., Rowstron, A. (eds.) IPTPS 2002. pp. 130-140. Springer-Verlag, Berlin (2002)
6. Harrison, D.: BEP 008: Tracker Peer Obfuscation. http://www.bittorrent.org/beps/bep_0008.html
7. Hjelmvik, E., John, W.: Breaking and Improving Protocol Obfuscation. Department of Computer Science and Engineering, Chalmers University of Technology, Technical Report No. 2010-05, ISSN 1652-926X (2010)
8. Hevia, A., Micciancio, D.: An Indistinguishability-Based Characterization of Anonymous Channels. In: Borisov, N., Goldberg, I. (eds.) PET 2008. pp. 24-43. Springer-Verlag, Berlin (2008)
9. Kopsell, S., Hilling, U.: How to Achieve Blocking Resistance for Existing Systems Enabling Anonymous Web Surfing. In: Proceedings of the Workshop on Privacy in the Electronic Society. pp. 103-115. ACM Press, New York (2004)
10. Leidl, B.: Obfuscated-OpenSSH README. https://github.com/brl/obfuscated-openssh/blob/master/README.obfuscation (2010)
11. Message Stream Encryption. http://wiki.vuze.com/w/Message_Stream_Encryption (2006)
12. Obfuscated TCP. Wikipedia. http://en.wikipedia.org/wiki/Obfuscated_TCP (2010)
13. Perng, G., Reiter, M.K., Wang, C.: Censorship Resistance Revisited. In: Barni, M. (ed.) IH 2005. pp. 62-76. Springer-Verlag, Berlin (2005)
14. Pfitzmann, A., Kohntopp, M.: Anonymity, Unobservability, and Pseudonymity – A Proposal for Terminology. In: Federrath, H. (ed.) Anonymity 2001. pp. 1-9. Springer-Verlag, Berlin (2001)
15. Sennhauser, M.: The State of Iranian Communication. http://diode.mbrez.com/docs/SoIN.pdf (2009)
16. Serjantov, A.: Anonymizing Censorship Resistant Systems. In: Druschel, P., Kaashoek, F., Rowstron, A. (eds.) IPTPS 2002. pp. 111-120. Springer-Verlag, Berlin (2002)
17. Topolsky, R.: Comments of Robert M. Topolsky In the Matter of Petition of Free Press et al. for Declaratory Ruling that Degrading an Internet Application Violates the FCC’s Internet Policy Statement and Does Not Meet an Exception for “Reasonable Network Management”. Federal Communications Commission WC Docket No. 07-52, 08-7 (2008)
18. Using NetFlow Filtering or Sampling to Select the Network Traffic to Track. http://www.cisco.com/en/US/docs/ios/netflow/configuration/guide/nflow_filt_samp_traff.html#wp1064305 (2006)
19. Waldman, M., Rubin, A.D., Cranor, L.F.: Publius: A Robust, Tamper-evident, Censorship-resistant Web Publishing System. In: 9th USENIX Security Symposium
20. Waldman, M., Mazieres, D.: Tangler: A Censorship-resistant Publishing System Based on Document Entanglements. In: Computer and Communications Security. pp. 126-135. ACM Press, New York (2001)

Technologies through the Human Eye

Nowadays, technologies are updated from time to time, but few try to hear people's opinions about new technologies. Here we are introducing a new platform where you can express your opinion about the latest technologies. We will post here about what is new in the industry, and your questions are also welcome.