
Tuesday, September 13, 2011

World's Most Innovative Companies

Bangalore: We use a wide range of innovative products in our daily lives. Innovation is broad in nature, touching many parts of an organization and many areas of potential opportunity. While the staggering array of possibilities can be intimidating, innovation is most effective when it embraces this diversity and includes as many perspectives as possible. The companies listed below have worked to give consumers their best, so it is no wonder they top the list of the world's most innovative companies.

1. Google:


The search engine giant tops the list of the most innovative companies in the world. Its creativity shows in its layouts, display ads, Google Talk, Google Docs and Spreadsheets, Google Calendar, Google Checkout, Picasa Web Albums and many more. Apps such as Picasa have made the search engine very user friendly, and Google seems to have an answer for every question on people's minds.

2. Apple:


Steve Jobs' innovations have made a mark in the world of technology. The former CEO of Apple, gifted with a great intellect, took the company's cash reserves so high that they briefly exceeded the U.S. Treasury's operating balance. The iPod saga is still in demand and will remain so, as Apple's planning and new innovations have always excited people. The graphical user interface, the iMac, iTunes, Mac OS X and the iPhone are the products that sparked the rivalry among the top tech giants. Apple is the king of this gadget generation.

3. Nokia:


The Finnish company has built strong goodwill through the quality of its products. Nokia's early phones, such as the 1100 series and the 2100 model that introduced the Nokia ringtone, are unforgettable; they brought mobile phones to every corner of the world. Nokia's smartphones may not be up to the mark of Apple's, but they differ in applications and usage. Nokia's own mission and vision is to maintain its goodwill rather than just sell products.

4. Facebook:


Mark Zuckerberg created the site as a young student at Harvard. Many critics have called this website one of the worst innovations of the decade, but its demand and usage have proved otherwise. Facebook enjoys a near monopoly because of its applications, layout and constant advancement, which bring people together from every corner of the world. The site is very easy to use and publicly searchable, and the privacy settings work well as long as users know their limits. Surveys have rated Facebook among the most innovative websites for its virtual gaming, chatting and status updates.

5. Disney:


Disney has made an art of innovation. Its animated movies, cartoons and the Disneyland parks have attracted billions of people who will never forget the founder, Walt Disney. It is a land of innovation where the fun, the illusions and the works themselves have become memories for millions.

6. News corp.:


Rupert Murdoch is not called a media mogul for nothing, given the turning points he brought to the world of journalism and media. Fox, The Wall Street Journal, the acquisition of MySpace, the New York Post and The Sun came to dominate print media, a dominance that contributed to the closure of thousands of newspapers in the U.S. News Corp's e-papers are a great hit, and the Fox channel and movies are no less popular.

7. Nike:


The sporty generation loves Nike for amazing products that deliver the heights of comfort and richness. The new Nike Air Max 360, which lets the wearer "run on air," is the latest innovation and has created a buzz among the youth. Users have called this full air-cushioning system, designed for runners, one of the most amazing shoes in the world. Nike's jerseys, bags, shoes and deodorants have always been in great demand for their comfort.

8. Samsung:


Samsung Electronics was honored with 37 CES 2011 Innovation Awards, so it is no wonder the company appears on this list. The winners included Samsung's HT-C9950W 3D Blu-ray home theater system, the Samsung NX100 camera with the NX 20-50mm lens (the world's first intelligent i-Function lens), Samsung's 55-inch two-sided edge-lit 3D LED panel and the Samsung WF520 front-loader washing machine. Samsung is more famous for its electronic appliances than for its mobiles and tablets.

9. Amazon:



Amazon's business model and innovation techniques are unique and highly creative. Its goal is to be highly informative rather than to make its website look attractive, and professionalism is evident in everything it does. Its Elastic Compute Cloud (EC2) is in demand among top organizations, and the site itself is one of the most informative on the internet.

10. Procter & Gamble:


P&G gives people a wide list of options, manufacturing a variety of goods for domestic use. Its everyday home products have gained a great reputation among consumers. Its best-known brands, such as Head & Shoulders, Gillette, Pantene and Actonel, show that it focuses on beauty and grooming products, which are in great demand.

Tuesday, July 19, 2011

Facebook bans Google+ ad

Bangalore: Alarmed by the great success of Google+, social networking giant Facebook has reportedly removed a Google+ ad and banned its creator from placing ads on the site. The ad, created by a Facebook user, invited people to connect with its creator on Google+; Facebook has decided to ban ads on its site that promote its rival Google+.

Internet geek and web developer Michael Lee Johnson placed an ad on Facebook promoting Google+; however, his idea did not work out as he expected, as Facebook banned his ad account. The simple-looking ad carried the headline "Add Michael to Google+." The text of the ad read, "If you're lucky enough to have a Google+ account, add Michael Lee Johnson, Internet Geek, App Developer, Technological Virtuoso."

Upon having his account banned, he received a message from Facebook saying, "Your account has been disabled. All of your adverts have been stopped and should not be run again on the site under any circumstances. Generally, we disable an account if too many of its adverts violate our Terms of Use or Advertising guidelines. Unfortunately we cannot provide you with the specific violations that have been deemed abusive. Please review our Terms of Use and Advertising guidelines if you have any further questions."

Facebook's advertising guidelines clearly state that it may refuse ads at any time for any reason, including its determination that they promote competing products or services or negatively affect its business or its relationship with its users.

'Harry Potter' soars high in technology

Bangalore: One of the most awaited movies of the year is 'Harry Potter and the Deathly Hallows Part 2,' and with more than $40 million in advance ticket sales it was expected to be even bigger. It has already recorded $168.6 million in the U.S. and Canada in just three days, and it posted the biggest international debut ever, grossing a magical $307 million in 59 foreign countries. With awesome animation and mind-boggling technology, it is heading for record-breaking box office business.


Two things make the difference this time: the awesome animation and the best use of Facebook and Twitter to promote the movie. Back in 2001, when the first Harry Potter movie was released, Facebook and Twitter did not exist, but in 2011 Harry Potter made the most of the social networking sites to reach out to its fans.

Animation


The awesome fantasy world is the culminated work of a combination of companies including Rising Sun Pictures, Double Negative, Cinesite, Framestore and Industrial Light & Magic. The visual effects are really fantastic, giving a lifelike feel that takes you into a world of magic, witches and heroism. Technology has clearly taken a leap, as was evident in this final Harry Potter installment, and Warner Bros. put on a very clean and neat show.

Facebook

'Harry Potter and the Deathly Hallows Part 2' has kicked its social media marketing up a notch. Warner Bros. has long maintained a solid Facebook presence for the Harry Potter film series, and the studio ramped up its efforts in engagement and in Facebook app features. Nearly 29 million users have 'liked' the Harry Potter Facebook page. In the week before the premiere, which took place on July 7, 2011, the page gained nearly 100,000 new fans per day. Frequent posts of images, behind-the-scenes tidbits, interviews with the stars, links to coverage from other media outlets and movie trailers have lifted the fan following and the 'like' numbers. Warner Bros. has also created local Facebook fan pages for a variety of countries and languages.

Twitter

The Harry Potter film Twitter account, with 343,000 followers, is not as active as the Facebook page, but it has done a pretty good job of engaging users by sharing links to interviews and media articles and posting photos. Photos from the premiere night were also live-tweeted using TwitPic.

It seems all the strategies have paid off well for the producers as it is busy making waves in the industry.

Friday, July 8, 2011

Facebook trapped in MySQL 'fate worse than death'

According to database pioneer Michael Stonebraker, Facebook is operating a huge, complex MySQL implementation equivalent to “a fate worse than death,” and the only way out is “bite the bullet and rewrite everything.”

Not that it’s necessarily Facebook’s fault, though. Stonebraker says the social network’s predicament is all too common among web startups that start small and grow to epic proportions.

During an interview this week, Stonebraker explained to me that Facebook has split its MySQL database into 4,000 shards in order to handle the site’s massive data volume, and is running 9,000 instances of memcached in order to keep up with the number of transactions the database must serve. I’m checking with Facebook to verify the accuracy of those numbers, but Facebook’s history with MySQL is no mystery.

The oft-quoted statistic from 2008 is that the site had 1,800 servers dedicated to MySQL and 805 servers dedicated to memcached, although multiple MySQL shards and memcached instances can run on a single server. Facebook even maintains a MySQL at Facebook page dedicated to updating readers on the progress of its extensive work to make the database scale along with the site.
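Splitting a database into thousands of shards, as described above, usually means the application tier decides which shard holds a given row before any query is issued. The sketch below is a hypothetical illustration of that routing idea; the shard names, the simple modulo scheme and the cache-key format are my assumptions, not Facebook's actual implementation.

```python
# Hypothetical sketch of shard routing: map a user id to one of N MySQL
# shards and to a memcached key. Illustrative only.
NUM_SHARDS = 4000

def shard_for_user(user_id: int, num_shards: int = NUM_SHARDS) -> str:
    """Pick a shard with simple modulo hashing on the user id."""
    return f"mysql-shard-{user_id % num_shards:04d}"

def cache_key(user_id: int) -> str:
    """Key under which the user's row would be cached in memcached,
    so the database is only hit on a cache miss."""
    return f"user:{user_id}"

# Example: user 4001 lands on shard 0001, and would be cached as "user:4001".
print(shard_for_user(4001), cache_key(4001))
```

Modulo hashing is the simplest possible scheme; real deployments often use directory lookups or consistent hashing so shards can be added without rehashing every key.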

The widely accepted problem with MySQL is that it wasn’t built for webscale applications or those that must handle excessive transaction volumes. Stonebraker said the problem with MySQL and other SQL databases is that they consume too many resources for overhead tasks (e.g., maintaining ACID compliance and handling multithreading) and relatively few on actually finding and serving data. This might be fine for a small application with a small data set, but it quickly becomes too much to handle as data and transaction volumes grow.

This is a problem for a company like Facebook because it has so much user data, and because every user clicking “Like,” updating his status, joining a new group or otherwise interacting with the site constitutes a transaction its MySQL database has to process. Every second a user has to wait while a Facebook service calls the database is time that user might spend wondering if it’s worth the wait.
Not just a Facebook problem

In Stonebraker’s opinion, “old SQL” (as he calls it) “is good for nothing” and needs to be “sent to the home for retired software.” After all, he explained, SQL was created decades ago, before the web, mobile devices and sensors forever changed how, and how often, databases are accessed.

But products such as MySQL are also open source and free, and SQL skills aren’t hard to come by. This means, Stonebraker says, that when web startups decide they need to build a product in a hurry, MySQL is a natural choice. But then they hit hockey-stick growth, like Facebook did, and they don’t really have the time to re-engineer the service from the database up. Instead, he said, they end up applying Band-Aid fixes that solve problems as they occur but never really fix the underlying problem of an inadequate data-management strategy.

There have been various attempts to overcome SQL’s performance and scalability problems, including the buzzworthy NoSQL movement that burst onto the scene a couple of years ago. However, it was quickly discovered that while NoSQL might be faster and scale better, it did so at the expense of ACID consistency. As I explained in a post earlier this year about Citrusleaf, a NoSQL provider claiming to maintain ACID properties:

ACID is an acronym for “Atomicity, Consistency, Isolation, Durability” — a relatively complicated way of saying transactions are performed reliably and accurately, which can be very important in situations like e-commerce, where every transaction relies on the accuracy of the data set.

Stonebraker thinks sacrificing ACID is a “terrible idea,” and, he noted, NoSQL databases end up only being marginally faster because they require writing certain consistency and other functions into the application’s business logic.

Stonebraker added, though, that NoSQL is a fine option for storing and serving unstructured or semi-structured data such as documents, which aren’t really suitable for relational databases. Facebook, for example, created Cassandra for certain tasks and also uses the Hadoop-based HBase heavily, but it’s still a MySQL shop for much of its core needs.
Is ‘NewSQL’ the cure?

But Stonebraker — an entrepreneur as much as a computer scientist — has an answer for the shortcomings of both “old SQL” and NoSQL. It’s called NewSQL (a term coined by 451 Group analyst Matthew Aslett) or scalable SQL, as I’ve referred to it in the past. Pushed by companies such as Xeround, Clustrix, NimbusDB, GenieDB and Stonebraker’s own VoltDB, NewSQL products maintain ACID properties while eliminating most of the other functions that slow legacy SQL performance. VoltDB, an online-transaction-processing (OLTP) database, uses a number of methods to improve speed, including running entirely in memory instead of on disk.

It would be easy to accuse Stonebraker of tooting his own horn, but NewSQL vendors have been garnering lots of attention, investment and customers over the past year. There’s no guarantee they’re the solution for Facebook’s MySQL woes — the complexity of Facebook’s architecture and the company’s penchant for open source being among the reasons — but perhaps NewSQL will help the next generation of web startups avoid falling into the pitfalls of their predecessors. Until, that is, it, too, becomes a relic of the Web 3.0 era.

Thursday, July 7, 2011

Most popular person on Google+

Bangalore: Mark Zuckerberg, the founder and chief executive of Facebook, is the most popular person on Google+, reports The New York Times. He has nearly 35,000 people following his updates on the service.

Neither Facebook nor Google has confirmed whether Zuckerberg's profile is real. But his account is linked with those of several Facebook executives who are also on Google+, including Bret Taylor, the chief technology officer, and Sam Lessin, a product manager, suggesting that it is authentic. Zuckerberg has yet to post anything visible to the wider public.

The new service, Google+, is less than a week old and not yet widely available to the public. But access to the service, which lets people share photos, links, status updates and video chats with groups of friends, is already in high demand among early adopters eager to play with its features. That includes Zuckerberg, who apparently signed up to keep tabs on his new adversary.

Facebook also has the world's biggest map of the connections between people. It is not possible to transfer data about one's Facebook connections into Google+, so most users will have to rebuild that list on the new service.

Facebook, emails 'taking over family life'

LONDON: In today's world, they may be the best way to keep in contact with others, but emails and social networking websites like Facebook are disrupting family life, a new study has claimed.

Researchers at Cambridge University have found that family life is being disrupted as parents and kids are overwhelmed by the huge volumes of emails and social messaging updates they are handling each day.

As a result, one in three Britons is now desperate to cut down on the use of Twitter and Facebook as well as email, a newspaper reported.

Surprisingly, the study, based on a survey, also found children as well as adults preferred to communicate face to face. More than half of all families said a "technology-free" time is important and a third of parents said technology had disrupted family life.

Amongst children aged 10 to 18, who have grown up with new technology, 38 per cent of respondents admitted to feeling overwhelmed by the volume of messages. And, similar numbers of adults felt the same way.

The study also discovered that 43 per cent of children and 33 per cent of adults are taking steps to reduce their reliance on messaging, text and networking. Both groups now preferred interacting face-to-face rather than through the internet or by mobile phone.

Professor John Clarkson, who led the study, said: "There is much discussion about whether communications technology is affecting us for the better or worse.

"The research has shown that communications technology is seen by most as a positive tool but there are examples where people aren't managing usage as well as they could be -- it's not necessarily the amount but the way it's used."

Sunday, January 9, 2011

Online advertising

Online advertising is a form of promotion that uses the Internet and World Wide Web for the expressed purpose of delivering marketing messages to attract customers. Examples of online advertising include contextual ads on search engine results pages, banner ads, Rich Media Ads, Social network advertising, interstitial ads, online classified advertising, advertising networks and e-mail marketing, including e-mail spam.

Online advertising sits alongside several broader forms of advertising:
1) Print Advertising – Newspapers, Magazines, Brochures, Fliers
2) Outdoor Advertising – Billboards, Kiosks, Tradeshows and Events
3) Broadcast Advertising – Television, Radio and the Internet
4) Covert Advertising – Advertising in Movies
5) Surrogate Advertising – Advertising Indirectly
6) Public Service Advertising – Advertising for Social Causes
7) Celebrity Advertising

Friday, December 10, 2010

Rectifiers

Rectifier
A rectifier is an electrical device that converts alternating current (AC) into direct current (DC). This process of converting AC to DC is known as rectification.
Rectifiers are used as components of power supplies and as detectors of radio signals. They may be built from solid-state diodes, vacuum tube diodes and other components.
The circuit that performs the reverse function, converting DC to AC, is known as an inverter.
Half-wave Rectifier
A half-wave rectifier uses a single diode to pass only one half of each AC cycle and block the other. It is simple, but it wastes half of the input waveform, so its output has a low average (DC) level and large ripple.

Full-wave Rectifier
A full-wave rectifier converts both halves of the AC cycle into a single-polarity output, giving twice the average DC level of a half-wave rectifier for the same input.
Types of Full-wave Rectifier
1) Center-tapped Rectifier
2) Bridge Rectifier
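The difference between half-wave and full-wave rectification can be sketched numerically. The model below assumes ideal diodes (no forward voltage drop) and is only an illustration of the waveform math, not a circuit simulation.

```python
import math

def half_wave(v: float) -> float:
    """Ideal half-wave rectifier: passes positive half-cycles, blocks negative ones."""
    return v if v > 0 else 0.0

def full_wave(v: float) -> float:
    """Ideal full-wave rectifier: flips negative half-cycles to positive."""
    return abs(v)

# One full cycle of an AC waveform with a 230 V peak, sampled at 100 points.
peak = 230.0
samples = [peak * math.sin(2 * math.pi * t / 100) for t in range(100)]

hw = [half_wave(v) for v in samples]
fw = [full_wave(v) for v in samples]

# Average (DC) level of each output: the full-wave output delivers
# twice the DC level of the half-wave output for the same input.
avg_hw = sum(hw) / len(hw)
avg_fw = sum(fw) / len(fw)
```

Running this shows avg_fw is double avg_hw, which is why full-wave designs (center-tapped or bridge) dominate in power supplies.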


Applications
The primary application of rectifiers is to derive usable DC power from an AC supply. Virtually all electronics, apart from simple motor circuits such as fans, require a DC supply, but mains power is AC, so rectifiers are used inside the power supply of nearly all electronic equipment.

Thursday, December 9, 2010

Bluetooth Hacking

Bluetooth is a wireless technology that enables electrical devices to communicate wirelessly in the 2.4 GHz ISM (license-free) frequency band. It allows devices such as mobile phones, headsets, PDAs and portable computers to communicate and send data to each other without wires or cables linking the devices together. It has been specifically designed as a low-cost, low-power radio technology, particularly suited to short-range Personal Area Network (PAN) applications. (This design focus on low cost, small size and low power distinguishes it from IEEE 802.11 wireless LAN technology.)
The Main Features of Bluetooth:
- Operates in the 2.4GHz frequency band without a license for wireless communication.
- Real-time data transfer, usually possible over 10-100 m.
- Unlike infrared (IrDA) communication devices, close proximity and line of sight are not required, as Bluetooth signals pass through obstacles such as walls.
- Supports point-to-point wireless connections without cables, e.g. between mobile phones and personal computers, as well as point-to-multipoint connections to enable ad hoc local wireless networks.
- Uses the unlicensed ISM (Industrial, Scientific and Medical) band, 2400 - 2483.5 MHz. Modulation: Gaussian frequency shift keying. Frequency hopping spread spectrum: 1600 hops/sec among 79 channels spaced at 1 MHz separation.

When and How was it Conceived?
Bluetooth was originally conceived by Ericsson in 1994, when they began a study to examine alternatives to cables that linked mobile phone accessories.
Where did the Name Come From?
Bluetooth was named after Harald Blåtand (or Bluetooth), a tenth-century Danish Viking king who united and controlled large parts of what are today Denmark and Norway. The name was chosen to highlight the potential of the technology to unify the telecommunications and computing industries.
SIG Membership?
Since its original foundation, the Bluetooth SIG has transitioned into a not-for-profit trade association, Bluetooth SIG, Inc. Membership is open to all companies wishing to develop, market and promote Bluetooth products at two levels - Associate and Adopter Members.
Bluetooth Security
2.1 The Bluetooth pairing & authentication process
The Bluetooth initialization procedures consist of 3 or 4 steps:
1. Creation of an initialization key (Kinit).
2. Creation of a link key (Kab).
3. Authentication.
After the 3 pairing steps are completed, the devices can derive an encryption key to hide all future communication in an optional fourth step.
Before the pairing process can begin, the PIN code must be entered into both Bluetooth devices. Note that in some devices (like wireless earphones) the PIN is fixed and cannot be changed. In such cases, the fixed PIN is entered into the peer device. If two devices have a fixed PIN, they cannot be paired, and therefore cannot communicate. In the following sections we go into the details of the steps of the pairing process.

2.1.1 Creation of Kinit
The Kinit key is created using the E22 algorithm, whose inputs are:
1. a BD_ADDR.
2. the PIN code and its length.
3. a 128 bit random number IN_RAND.
This algorithm outputs a 128-bit word, which is referred to as the initialization key (Kinit).
Figure 1 describes how Kinit is generated using E22. Note that the PIN code is available at both Bluetooth devices, and the 128 bit IN_RAND is transmitted in plaintext. As for the BD_ADDR: if one of the devices has a fixed PIN, they use the BD_ADDR of the peer device. If both have a variable PIN, they use the BD_ADDR of the slave device that receives the IN_RAND. In Figure 1, if both devices have a variable PIN, BD_ADDRB shall be used. The Bluetooth device address can be obtained by a device via an inquiry routine. This is usually done before connection establishment begins.
This initialization key (Kinit) is used only during the pairing process. Upon the creation of the link key (Kab), the Kinit key is discarded.
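The data flow of this step can be sketched in a few lines. Note the hedge: the real E22 is built on SAFER+, which is not in any standard library, so SHA-256 is used below purely as a stand-in; only the shape of the inputs and the 128-bit output match the text.

```python
import hashlib

def e22_sketch(bd_addr: bytes, pin: bytes, in_rand: bytes) -> bytes:
    """Illustrative stand-in for E22: derive a 128-bit Kinit from a
    BD_ADDR, the PIN code (and its length), and the 128-bit IN_RAND.
    SHA-256 here replaces the SAFER+-based construction."""
    material = bd_addr + bytes([len(pin)]) + pin + in_rand
    return hashlib.sha256(material).digest()[:16]   # truncate to 128 bits

bd_addr = bytes.fromhex("0123456789ab")   # 48-bit device address (example value)
pin = b"1234"                             # the shared secret
in_rand = bytes(16)                       # 128-bit random word, sent in plaintext
k_init = e22_sketch(bd_addr, pin, in_rand)
```

Because IN_RAND and BD_ADDR travel in the clear, the PIN is the only secret input here, which is exactly what the attack in Section 3 exploits.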

Figure 1: Generation of Kinit using E22

2.1.2 Creation of Kab
After creating the initialization key, the devices create the link key Kab. The devices use the initialization key to exchange two new 128 bit random words, known as LK_RANDA and LK_RANDB. Each device selects a random 128 bit word and sends it to the other device after bitwise xoring it with Kinit. Since both devices know Kinit, each device now holds both random numbers LK_RANDA and LK_RANDB. Using the E21 algorithm, both devices create the link key Kab. The inputs of E21 algorithm are:
1. a BD_ADDR.
2. The 128 bit random number LK_RAND.
Note that E21 is used twice in each device, with two sets of inputs. Figure 2 describes how the link key Kab is created.
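The XOR exchange at the heart of this step can be sketched as follows. The E21 derivation itself is omitted (it is SAFER+-based), and `xor_bytes` is an illustrative helper; the point is that masking with the shared Kinit lets each side recover the other's random word.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Both devices already share the 128-bit initialization key Kinit.
k_init = os.urandom(16)

# Each side picks a 128-bit random word and transmits it XORed with Kinit.
lk_rand_a, lk_rand_b = os.urandom(16), os.urandom(16)
wire_a = xor_bytes(lk_rand_a, k_init)    # what device A sends
wire_b = xor_bytes(lk_rand_b, k_init)    # what device B sends

# Each receiver strips Kinit to recover the peer's random word.
recovered_b = xor_bytes(wire_b, k_init)  # computed at device A
recovered_a = xor_bytes(wire_a, k_init)  # computed at device B

# Both sides now hold (LK_RAND_A, LK_RAND_B) and feed them, with the
# BD_ADDRs, into E21 twice each to produce the shared link key Kab.
```

An eavesdropper who guesses the PIN can compute Kinit, unmask both wire words exactly as above, and so derive the same Kab as the legitimate devices.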

Figure 2: Generation of Kab using E21

2.1.3 Mutual authentication
Upon creation of the link key Kab, mutual authentication is performed. This process is based on a challenge-response scheme. One of the devices, the verifier, randomizes and sends (in plaintext) a 128 bit word called AU_RANDA. The other device, the claimant, calculates a 32 bit word called SRES using an algorithm E1. The claimant sends the 32 bit SRES word as a reply to the verifier, who verifies (by performing the same calculations) the response word. If the response matches, the verifier and the claimant switch roles and repeat the entire process. Figure 3 describes the process of mutual authentication. The inputs to E1 are:
1. The random word AU_RANDA.
2. The link key Kab.
3. Its own Bluetooth device address (BD_ADDRB).
Note that as a side effect of the authentication process, both peers calculate a 96 bit word called ACO. This word is optionally used during the creation of the encryption key. The creation of this encryption key exceeds our primary discussion and shall not be described in this paper.
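A minimal sketch of one challenge-response round follows. As before, the hash is an assumption: the real E1 is SAFER+-based and also emits the 96-bit ACO; here a SHA-256 stand-in produces only the 32-bit SRES.

```python
import hashlib, os

def e1_sketch(k_ab: bytes, au_rand: bytes, bd_addr: bytes) -> bytes:
    """Stand-in for E1: compute the 32-bit SRES from the link key,
    the plaintext challenge, and the claimant's device address."""
    return hashlib.sha256(k_ab + au_rand + bd_addr).digest()[:4]

k_ab = os.urandom(16)                         # shared link key Kab
claimant_addr = bytes.fromhex("0123456789ab") # claimant's BD_ADDR (example)

# Verifier sends a 128-bit plaintext challenge...
au_rand = os.urandom(16)
# ...the claimant answers with SRES, and the verifier recomputes and compares.
sres_claimant = e1_sketch(k_ab, au_rand, claimant_addr)
sres_verifier = e1_sketch(k_ab, au_rand, claimant_addr)
ok = sres_claimant == sres_verifier
```

Since both AU_RAND and SRES cross the air in plaintext, each authentication round hands an eavesdropper a 32-bit test value for checking PIN guesses offline.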

Figure 3: Mutual authentication process using E1

2.2 Bluetooth cryptographic primitives
As we described above, the Bluetooth pairing and authentication process uses three algorithms: E22, E21, E1. All of these algorithms are based on the SAFER+ cipher with some modifications. Here we describe features of SAFER+ that are relevant to our attack.

2.2.1 Description of SAFER+
SAFER+ is a block cipher with a block size of 128 bits and three different key lengths: 128, 192 and 256 bits. Bluetooth uses SAFER+ with 128 bit key length. In this mode, SAFER+ consists of:
1. KSA - A key scheduling algorithm that produces 17 different 128-bit subkeys.
2. 8 identical rounds.
3. An output transformation - implemented as an XOR between the output of the last round and the last subkey.
Figure 4 describes the inner design of SAFER+, as it is used in Bluetooth.

Figure 4: Inner design of SAFER+
The key scheduling algorithm (KSA)
The key scheduling algorithm used in SAFER+ produces 17 different 128-bit subkeys, denoted K1 to K17. Each SAFER+ round uses 2 subkeys, and the last key is used in the SAFER+ output transformation. The important details for our discussion are that in each step of the KSA, each byte is cyclic-rotated left by 3 bit positions, and 16 bytes (out of 17) are selected for the output subkey. In addition, a 128 bit bias vector, different in each step, is added to the selected output bytes.
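The per-step byte rotation described above can be shown directly. This sketch covers only the rotation; the bias-vector addition and the selection of 16 of the 17 bytes are omitted for brevity.

```python
def rotl3(byte: int) -> int:
    """Cyclically rotate an 8-bit value left by 3 bit positions,
    as applied to every key byte in each step of the SAFER+ KSA."""
    return ((byte << 3) | (byte >> 5)) & 0xFF

def ksa_rotate(key: bytes) -> bytes:
    """Apply the per-step rotation to all bytes of the key register
    (bias-vector addition and byte selection not shown)."""
    return bytes(rotl3(b) for b in key)

# Example: 0x81 = 0b10000001 rotates to 0b00001100 = 0x0C.
print(hex(rotl3(0x81)))
```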
The SAFER+ Round
As depicted, SAFER+ consists of 8 identical rounds. Each round calculates a 128 bit word out of two subkeys and a 128 bit input word from the previous round.
3 Bluetooth PIN Cracking


3.1 The Basic Attack:

Table 1: List of messages sent during the pairing and authentication process. "A" and "B" denote the two Bluetooth devices.

#   Src  Dst  Data       Length    Notes
1   A    B    IN_RAND    128 bit   plaintext
2   A    B    LK_RANDA   128 bit   XORed with Kinit
3   B    A    LK_RANDB   128 bit   XORed with Kinit
4   A    B    AU_RANDA   128 bit   plaintext
5   B    A    SRES       32 bit    plaintext
6   B    A    AU_RANDB   128 bit   plaintext
7   A    B    SRES       32 bit    plaintext


Assume that the attacker eavesdropped on an entire pairing and authentication process, and saved all the messages (see Table 1). The attacker can now use a brute force algorithm to find the PIN used. The attacker enumerates all possible values of the PIN. Knowing IN_RAND and the BD_ADDR, the attacker runs E22 with those inputs and the guessed PIN, and finds a hypothesis for Kinit. The attacker can now use this hypothesis of the initialization key, to decode messages 2 and 3. Messages 2 and 3 contain enough information to perform the calculation of the link key Kab, giving the attacker a hypothesis of Kab. The attacker now uses the data in the last 4 messages to test the hypothesis: Using Kab and the transmitted AU_RANDA (message 4), the attacker calculates SRES and compares it to the data of message 5. If necessary, the attacker can use the value of messages 6 and 7 to re-verify the hypothesis Kab until the correct PIN is found. Figure 6 describes the entire process of PIN cracking.
Note that the attack, as described, is only fully successful against PIN values of under 64 bits. If the PIN is longer, then with high probability there will be multiple PIN candidates, since the two SRES values only provide 64 bits of data to test against. A 64 bit PIN is equivalent to a 19-digit decimal PIN.
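The brute-force loop can be sketched as below. This is a simplified model: E22 and E1 are replaced by SHA-256 stand-ins (the real algorithms are SAFER+-based), and the Kab derivation from messages 2 and 3 is collapsed so each guess is tested directly against the observed SRES.

```python
import hashlib

def e22(pin: bytes, bd_addr: bytes, in_rand: bytes) -> bytes:
    # Stand-in for the SAFER+-based E22 (illustration only).
    return hashlib.sha256(b"E22" + bd_addr + bytes([len(pin)]) + pin + in_rand).digest()[:16]

def e1(key: bytes, au_rand: bytes, bd_addr: bytes) -> bytes:
    # Stand-in for E1; the real SRES is 32 bits.
    return hashlib.sha256(b"E1" + key + au_rand + bd_addr).digest()[:4]

def crack_pin(bd_addr, in_rand, au_rand, observed_sres, max_digits=4):
    """Enumerate decimal PINs, recompute the key material for each guess,
    and test the guess against the eavesdropped SRES (Table 1 data)."""
    for guess in range(10 ** max_digits):
        pin = str(guess).zfill(max_digits).encode()
        k = e22(pin, bd_addr, in_rand)    # hypothesis for Kinit
        # (A full attack would also unmask LK_RANDA/LK_RANDB here and
        # derive Kab; we test against the key hypothesis directly.)
        if e1(k, au_rand, bd_addr) == observed_sres:
            return pin
    return None

# Simulated eavesdropped exchange whose secret PIN is 1234:
bd_addr, in_rand, au_rand = b"\x01" * 6, b"\x02" * 16, b"\x03" * 16
secret_sres = e1(e22(b"1234", bd_addr, in_rand), au_rand, bd_addr)
print(crack_pin(bd_addr, in_rand, au_rand, secret_sres))  # → b'1234'
```

Even this toy loop makes the paper's point: a 4-digit PIN gives only 10,000 candidates, so exhaustive search is effectively instantaneous.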


Figure 6: The Basic Attack Structure.


4 The Re-Pairing attack

4.1 Background and motivation
This section describes an additional attack on Bluetooth devices that is useful when used in conjunction with the primary attack described in Section 3. Recall that the primary attack is only applicable if the attacker has eavesdropped on the entire process of pairing and authentication. This is a major limitation since the pairing process is rarely repeated. Once the link key Kab is created, each Bluetooth device stores it for possible future communication with the peer device. If at a later point in time the device initiates communication with the same peer - the stored link key is used and the pairing process is skipped. Our second attack exploits the connection establishment protocol to force the communicating devices to repeat the pairing process. This allows the attacker to record all the messages and crack the PIN using the primary attack described in this paper.

4.2 Attack details
Assume that two Bluetooth devices that have already been paired before now intend to establish communication again. This means that they don't need to create the link key Kab again, since they have already created and stored it before. They proceed directly to the Authentication phase (Recall Figure 3). We describe three different methods that can be used to force the devices to repeat the pairing process. The efficiency of each method depends on the implementation of the Bluetooth core in the device under attack. These methods appear in order of efficiency:
1. Since the devices skipped the pairing process and proceeded directly to the Authentication phase, the master device sends the slave an AU_RAND message, and expects the SRES message in return. Note that Bluetooth specifications allow a Bluetooth device to forget a link key. In such a case, the slave sends an LMP_not_accepted message in return, to let the master know it has forgotten the link key. Therefore, after the master device has sent the AU_RAND message to the slave, the attacker injects a LMP_not_accepted message toward the master. The master will be convinced that the slave has lost the link key and pairing will be restarted. Restarting the pairing procedure causes the master to discard the link key. This assures pairing must be done before devices can authenticate again.
2. At the beginning of the Authentication phase, the master device is supposed to send the AU_RAND to the slave. If, before it does so, the attacker injects an IN_RAND message toward the slave, the slave device will be convinced that the master has lost the link key, and pairing will be restarted. This causes the connection establishment to restart.
3. During the Authentication phase, the master device sends the slave an AU_RAND message, and expects a SRES message in return. If, after the master has sent the AU_RAND message, an attacker injects a random SRES message toward the master, this will cause the Authentication phase to restart, and repeated attempts will be made. At some point, after a certain number of failed authentication attempts, the master device is expected to declare that the authentication procedure has failed (implementation dependent) and initiate pairing.
Each of the three methods described above causes one of the devices to discard its link key. This ensures that the pairing process will occur during the next connection establishment, so the attacker can eavesdrop on the entire process and use the method described in Section 3 to crack the PIN.
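The decision logic behind these three injection points can be sketched as a toy model. The message names mirror the LMP messages above, while the radio layer and all timing details are abstracted away; this is an illustration of the protocol logic, not an attack implementation:

```python
# Toy model of the three re-pairing injection points. A real attack
# requires injecting these messages over the air at the right moment.

def repairing_trigger(injected, timing):
    """Return True if the injected message forces the devices to re-pair."""
    if injected == "LMP_not_accepted" and timing == "after_AU_RAND":
        # Method 1: master believes the slave lost the link key.
        return True
    if injected == "IN_RAND" and timing == "before_AU_RAND":
        # Method 2: slave believes the master lost the link key.
        return True
    if injected == "random_SRES" and timing == "after_AU_RAND":
        # Method 3: authentication fails repeatedly until the master
        # gives up and initiates pairing (implementation dependent,
        # hence the least reliable of the three).
        return True
    return False
```

Note that only messages a legitimate peer could plausibly send at that protocol step have any effect, which is why the injection must be timed precisely.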
In order to make the attack "online", the attacker can save all the messages transferred between the devices after the pairing is complete. After breaking the PIN (0.06-0.3 sec for a 4 digit PIN), the attacker can decrypt the saved messages and continue to eavesdrop and decrypt the communication on the fly. Since Bluetooth supports a bit rate of 1 Megabit per second, a 40KB buffer is more than enough for the common case of a 4 digit PIN.
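A quick back-of-the-envelope check of the buffer-size claim, assuming the 1 Mbit/s rate and the 0.3-second worst-case cracking time quoted above:

```python
# How much traffic accumulates while a 4-digit PIN is being cracked?
BIT_RATE = 1_000_000   # bits per second (basic-rate Bluetooth)
CRACK_TIME = 0.3       # seconds, worst case quoted for a 4-digit PIN

buffered_bytes = BIT_RATE * CRACK_TIME / 8
print(f"{buffered_bytes / 1024:.1f} KB")   # ~36.6 KB, under the 40 KB buffer
```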
Notes:
1. The Bluetooth specification does allow devices to forget link keys and to require repeating the pairing process. This fact makes the re-pairing attack applicable.
2. Re-pairing is an active attack that requires the attacker to inject a specific message at a precise point in the protocol. This most likely requires a custom Bluetooth device, since off-the-shelf components will not support such behavior.
3. If the slave device verifies that the message it receives is from the correct BD_ADDR, then the attack requires the injected message to have its source BD_ADDR "spoofed" - again requiring custom hardware.
4. If the attack is successful, the Bluetooth user will need to enter the PIN again - so a suspicious user may realize that his Bluetooth device is under attack and refuse to enter the PIN.
5 Countermeasures
This section details the countermeasures one should consider when using a Bluetooth device. These countermeasures reduce both the probability of being subjected to the attacks described above and the exposure when such attacks do occur.

1. Since Bluetooth is a wireless technology, it is very difficult to prevent Bluetooth signals from leaking outside the desired boundaries. One should therefore follow the recommendation in the Bluetooth standard and enter the PIN for pairing as rarely as possible. This reduces the risk of an attacker eavesdropping on the pairing process and finding the PIN used.
Most Bluetooth devices save the link key (Kab) in non-volatile memory for future use, so that when the same Bluetooth devices wish to communicate again, they use the stored link key. However, there is another mode of operation, which requires entering the PIN into both devices every time they wish to communicate, even if they have already been paired before. This mode gives a false sense of security! Repeating the pairing process every time increases the probability of an attacker eavesdropping on the messages transferred. We advise against using this mode of operation.
2. The PIN length ranges from 8 to 128 bits. Most manufacturers use a 4 digit PIN and supply it with the device. Obviously, customers should demand the ability to use longer PINs.
3. Instead of passing messages in plain text, messages should be encrypted before transmission.

The Future of Bluetooth
The next version of Bluetooth, currently code named Lisbon, includes a number of features to increase security, usability and value of Bluetooth. The following features are defined:
- Atomic Encryption Change
- Extended Inquiry Response
- Sniff Subrating
- QoS Improvements
- Simple Pairing
Types of attacks in Bluetooth
The SNARF attack:
On some makes of device, it is possible to connect without alerting the owner of the target device and to gain access to restricted portions of the stored data, including the entire phonebook (and any images or other data associated with the entries), calendar, real-time clock, business card, properties, change log, and IMEI (International Mobile Equipment Identity [6], which uniquely identifies the phone to the mobile network and is used in illegal phone 'cloning'). This is normally only possible if the device is in "discoverable" or "visible" mode, but there are tools available on the Internet that allow even this safety net to be bypassed.
The BACKDOOR attack:
The backdoor attack involves establishing a trust relationship through the "pairing" mechanism, but ensuring that it no longer appears in the target's register of paired devices. In this way, unless the owner is actually observing their device at the precise moment a connection is established, they are unlikely to notice anything, while the device continues to grant the attacker access to its services. This means that not only can data be retrieved from the phone, but other services, such as modems or Internet, WAP and GPRS gateways, may be accessed without the owner's knowledge or consent. Indications are that once the backdoor is installed, the SNARF attack described above will function on devices that previously denied access, and without the restrictions of a plain SNARF attack, so we strongly suspect that the other services will prove to be available as well.
The BLUEBUG attack:
The bluebug attack creates a serial profile connection to the device, thereby giving full access to the AT command set, which can then be exploited using standard off-the-shelf tools, such as PPP for networking and gnokii for messaging, contact management, diverts and initiating calls. With this facility, it is possible to use the phone to initiate calls to premium rate numbers, send and read SMS messages, connect to data services such as the Internet, and even monitor conversations in the vicinity of the phone. The latter is done via a voice call over the GSM network, so the listening post can be anywhere in the world; Bluetooth access is only required for a few seconds in order to set up the call. Call forwarding diverts can be set up, allowing the owner's incoming calls to be intercepted, either to provide a channel for calls to more expensive destinations, or for identity theft by impersonation of the victim.
Scanning for Bluetooth addresses
The Bluetooth address itself is a unique 48-bit device identifier, in which the first 3 bytes are assigned to a specific manufacturer by the IEEE (www.ieee.org/) and the last 3 bytes are freely allocated by the manufacturer. For example, the hexadecimal representation of a Sony Ericsson P900 phone's Bluetooth address may look like 00:0A:D9:EB:66:C7, where the first 3 bytes of this address (00:0A:D9) are registered to Sony Ericsson by the IEEE, meaning that all P900 phones will have a Bluetooth address starting with the same 3 bytes. The last 3 bytes (EB:66:C7) of the sample address are assigned to this device by Sony Ericsson and should be different for each P900 phone -- but this is not always the case, unfortunately.
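Splitting a BD_ADDR into its IEEE-assigned prefix and the manufacturer-allocated device part is straightforward; a small sketch (the helper name is ours):

```python
def split_bd_addr(addr: str):
    """Split a BD_ADDR into its IEEE-assigned OUI (first 3 bytes)
    and the manufacturer-allocated device part (last 3 bytes)."""
    octets = addr.split(":")
    if len(octets) != 6 or any(len(o) != 2 for o in octets):
        raise ValueError("expected six colon-separated hex octets")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = split_bd_addr("00:0A:D9:EB:66:C7")
print(oui)     # 00:0A:D9 -> registered to Sony Ericsson
print(device)  # EB:66:C7 -> assigned by the manufacturer
```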
In theory, enabling non-discoverable mode on a Bluetooth device should protect users from unauthorized connections, yet in practice it is still quite possible to find these devices. There are software tools available which allow brute-force discovery of non-discoverable devices. An example of such an application is RedFang by Ollie Whitehouse, a small application which simply tries one Bluetooth address after another until a hidden device answers the request sent to its particular address. The author's initial tests indicate that a minimum of about 6 seconds per address is needed to achieve a good level of accuracy (it varies from 2.5 to 10 seconds on average), although it is certainly possible to find a hidden device in less than 3 seconds. The address space used by Sony Ericsson has 16,777,216 possible addresses; if we assume 6 seconds are required per device, the total scan would take 1165 days, meaning we would need more than 3 years to discover all hidden Sony Ericsson phones in a conference room.
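The scan-time arithmetic can be reproduced directly:

```python
# A manufacturer's freely allocated space is 3 bytes = 16,777,216 addresses.
ADDRESS_SPACE = 256 ** 3
SECONDS_PER_PROBE = 6          # the per-address figure quoted above

total_days = ADDRESS_SPACE * SECONDS_PER_PROBE / 86_400
print(f"{total_days:.0f} days")   # ~1165 days, i.e. more than 3 years
```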
Conclusion:
With the advancement of digital convergence and m-commerce, the use of Bluetooth in connecting different devices is going to be significant. But to make communication more secure, advances in security must not be neglected.

Wednesday, December 8, 2010

Oracle 11G

Oracle Database 11g, building on Oracle's unique ability to deliver Grid Computing, gives Oracle customers the agility to respond faster to changing business conditions, gain competitive advantage through technology innovation, and reduce costs.

With Oracle Database 11g you can:

* Adopt new technology faster with Real Application Testing
* Manage more data for less with advanced compression and partitioning
* Simplify systems by storing all your data in the Oracle Database with Oracle SecureFiles
* Maximize the ROI of disaster recovery resources with Oracle Active Data Guard
* Free critical personnel for strategic tasks with management automation
* And much, much more...

Database Manageability

Oracle Database 11g is the next-generation self-managing database that helps businesses lower their IT operational costs while providing maximum performance and availability. This self-managing database automatically monitors, diagnoses and tunes itself. The Oracle Database 11g manageability features allow DBAs to become more productive and help their organizations reduce management costs and scale to manage the Enterprise Computing Grid.

Real Application Testing

System changes, such as hardware and software upgrades, configuration changes, etc., are essential for businesses to maintain their competitive edge as well as for compliance and security purposes. Oracle Real Application Testing helps you fully assess the effect of such system changes on real-world applications in test environments before deploying the change in production. Oracle Real Application Testing consists of two features, Database Replay and SQL Performance Analyzer. Together they enable enterprises to rapidly adopt new technologies that add value to the business while minimizing risk.

Some of the features referenced below are part of separately licensed Diagnostic Pack, Tuning Pack and Real Application Testing Option. Please refer to the Oracle Database Licensing Documentation for more details.

Optimized Storage Management

Oracle Database provides cost-effective, optimized storage management for all your data. Oracle minimizes costly I/O operations, reduces required storage capacity, and maximizes performance and utilization. Storage can be automatically managed, and optimizations transparently implemented.

Oracle Exadata

The HP Oracle Exadata Storage Server is a storage product highly optimized for use with the Oracle database. It provides database-aware storage services, such as the ability to offload database processing from the database server to storage, while remaining transparent to SQL processing and your database applications. Exadata storage delivers dramatic performance improvements with unlimited I/O scalability, is simple to use and manage, and delivers mission-critical availability and reliability to your enterprise. See the HP Oracle Exadata Storage Server Technical White Paper.

ILM

Oracle Database 11g provides the ideal environment for implementing your ILM solution: it is cost-effective, secure, transparent to the application, and achieves all of this without compromising performance. Oracle provides an ILM Assistant to help define lifecycle policies, manage data movement and report on benefits. See the white paper Implementing ILM using Oracle Database 11g.

Partitioning

Partitioning can improve the performance of certain queries or maintenance operations by an order of magnitude. Moreover, partitioning can greatly reduce the total cost of data ownership through a "tiered archiving" approach that keeps older but still relevant information online on low-cost storage devices. See the Oracle Partitioning White Paper.

Advanced Compression

Compression significantly reduces the storage footprint of databases through compression of structured data (numbers, characters), unstructured data (documents, spreadsheets, XML and other files) and backup data (RMAN backups and Data Pump exports). It also improves memory efficiency and provides I/O benefits, thereby maintaining or improving performance. Oracle Advanced Compression supports all database operations, including DML operations. See the Oracle Advanced Compression White Paper.

Automatic Storage Management

Automatic Storage Management (ASM) provides a vertically integrated file system and volume manager, purpose-built for Oracle database files. ASM saves DBAs time, provides optimal performance, and allows for online storage growth and migration. See the Oracle Automatic Storage Management White Paper.






Secure Files - The Next Generation Unstructured Data Management

SecureFiles is a new feature in Oracle Database 11g that offers the best solution for storing file content such as images, audio, video, PDFs and spreadsheets. Traditionally, relational data is stored in a database while unstructured data is stored as files in file systems. SecureFiles represents a major paradigm shift in the choice of file storage: it is specifically engineered to deliver high performance for file data, comparable to that of traditional file systems, while retaining the advantages of the Oracle database. SecureFiles offers a "best-of-both-worlds" architecture, combining the database and file system approaches to storing unstructured data.


Secure Files is a completely new architecture inside the Oracle Database 11g for handling file or unstructured data. It features entirely new disk formats, network protocol, space management, redo and undo formats, buffer caching, and intelligent I/O subsystem. SecureFiles represents the core infrastructure for managing unstructured content inside the Oracle database. With SecureFiles, Oracle has perfected the use of the database for storage of all enterprise data.

Key Technical Advantages

Secure Files is designed for high performance and includes advanced features typically found in high-end file systems.

* High Performance
* Deduplication
* Compression
* Encryption
* Advanced Logging

In addition to these advanced file system features, SecureFiles can take advantage of several advanced Oracle Database capabilities such as:

* Transactions, Read Consistency, Flashback
* 100% Backward Compatibility with LOB Interfaces
* Readable Standby, Consistent Backup, Point in Time Recovery
* Fine Grained Auditing, Label Security
* XML indexing, XML Queries, XPath
* Real Application Clusters
* Automatic Storage Management
* Partitioning and ILM

Data stored in SecureFiles can be accessed through both database and file system clients. SecureFiles interfaces are completely backward compatible with LOB interfaces. More information on using SecureFiles and migration is available in the Oracle documentation.

Oracle Database High Availability

Databases and the Internet have enabled worldwide collaboration and information sharing by extending the reach of database applications throughout organizations and communities. Both small businesses and global enterprises have users all over the world who require access to data 24 hours a day. Without this data access, revenue and customers can be lost, penalties can be owed, and bad press can have a lasting effect on customers and a company's reputation. Building a high availability IT infrastructure is critical to the success and well being of all enterprises in today's fast moving economy.

One of the challenges in designing an HA solution is examining and addressing all the possible causes of downtime. It is important to consider causes of both unplanned and planned downtime when designing a fault tolerant and resilient IT infrastructure. Unplanned downtime is primarily the result of computer failures or data failures. Planned downtime is primarily due to data changes or system changes that must be applied to the production system.

The following are the critical high availability solution areas, why they are important considerations for building a highly available solution, and Oracle's offerings in each of these areas.

* High Availability
* Backup & Recovery
* Disaster Recovery
* Storage Management
* Continuous Operations
* Best Practices for High Availability -- Maximum Availability Architecture (MAA)

High Availability Collateral - Oracle Database 11g
Overview: Oracle Database 11g High Availability
Technical White Paper: Oracle Data Guard 11g
Technical White Paper: Oracle Database 11g Data Repair Technologies - includes Data Recovery Advisor, RMAN, Flashback and Oracle Secure Backup

Technical White Paper: Oracle Streams 11g



Intrusion Detection System

Classification of intrusion detection systems
Primarily, an IDS is concerned with the detection of hostile actions. This network security tool uses either of two main techniques (described in more detail below). The first one, anomaly detection, explores issues in intrusion detection associated with deviations from normal system or user behavior. The second, signature detection, matches observed activity against a database of known attack patterns (signatures). Both methods have their distinct advantages and disadvantages as well as suitable application areas.
Considering the source of the data used for intrusion detection, intrusion detection systems can also be classified by the type of system they protect. There is a family of IDS tools that use information derived from a single host (system), host-based IDS (HIDS), and those IDSs that exploit information obtained from a whole segment of a local network (network-based IDS, i.e. NIDS).
Several types of HIDS can be distinguished [Dor02b]:
• Systems that monitor incoming connection attempts (RealSecure Agent, PortSentry). These examine incoming and outgoing network connections on the host, in particular unauthorized connection attempts to TCP or UDP ports, and can also detect incoming port scans.
• Systems that examine network traffic (packets) that attempts to access the host. These systems protect the host by intercepting suspicious packets and looking for aberrant payloads (packet inspection).
• Systems that monitor login activity onto the networking layer of their protected host (HostSentry). Their role is to monitor log-in and log-out attempts, looking for unusual activity on a system occurring at unexpected times, particular network locations or detecting multiple login attempts (particularly failed ones).
• Systems that monitor actions of a super-user (root) who has the highest privileges (LogCheck). IDS scans for unusual activity, increased super-user activity or actions performed at particular times, etc.
• Systems that monitor file system integrity (Tripwire, AIDE). Tools that have this ability (integrity checker) allow the detection of any changes to the files that are critical for the operating system.
• Systems that monitor the system register state (Windows platform only). They are designed to detect any illegal changes in the system register and alert the system administrator to this fact.
• Kernel based intrusion detection systems [Els00]. These are especially prevalent within Linux (LIDS, OpenWall). These systems examine the state of key operating system files and streams, preventing buffer overflow, blocking unusual interprocess communications, preventing an intruder from attacking the system. In addition, they can block a part of the actions undertaken by the super-user (restricting privileges).
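As a minimal illustration of the file-system integrity checking category above (the idea behind tools like Tripwire and AIDE), one can record cryptographic hashes of critical files as a baseline and later re-hash them to detect changes. This is a sketch of the concept, not a substitute for those tools; the helper names are ours:

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each file path to the SHA-256 digest of its contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed_files(baseline, current):
    """Return the paths whose digest differs from the baseline."""
    return sorted(p for p in baseline if current.get(p) != baseline[p])
```

A real integrity checker would also protect the baseline itself (e.g. store it offline or sign it), since an intruder who can rewrite the baseline defeats the check.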
The HIDS reside on a particular computer and provide protection for a specific computer system. They are not only equipped with system monitoring facilities but also include other modules of a typical IDS, for example the response module (see Part I of the cycle).
HIDS products such as Snort, Dragon Squire, Emerald eXpert-BSM, NFR HID, Intruder Alert all perform this type of monitoring.
The network-based type of IDS (NIDS) produces data about local network usage. A NIDS reassembles and analyzes all network packets that reach a network interface card operating in promiscuous mode. It does not deal only with packets destined for a specific host: all the machines in a network segment benefit from the protection of the NIDS. Network-based IDS can also be installed on active network elements, for example on routers.
Since intrusion detection (for example of flood-type attacks) employs statistical data on the network load, a certain type of dedicated NIDS can be distinguished separately, for example those that monitor traffic (Novell Analyzer, Microsoft Network Monitor). These capture all packets that they see on the network segment without analyzing their contents, focusing instead on building network traffic statistics.
Typical network-based intrusion systems are: Cisco Secure IDS (formerly NetRanger), Hogwash, Dragon, E-Trust IDS.
Certain authors (for example [Int02]) consider a blend of HIDS and NIDS as a separate class, the Network Node IDS (NNIDS), which has its agents deployed on every host within the network being protected (a typical NIDS uses network agents to monitor whole LAN segments). In fact, a NNIDS operates very much like a hybrid per-host NIDS, since a single agent usually processes the network traffic directed to the host it runs upon (an "every man for himself" approach). The main reason for introducing such a hybrid IDS was the need to work online with encrypted networks and data destined for a single host (only the source and destination can see decrypted network traffic). Most large commercially offered intrusion detection systems are hybrids that merge the strengths of HIDS and NIDS in a single product.
HIDS that look only at their host's traffic can easily detect local-to-local or local-to-root attacks, since they have a clear concept of locally available information; for example, they can exploit user IDs. Also, anomaly detection tools provide better coverage of internal problems, since their detection ability is based on the normal behavior patterns of the user.
The IDS can operate as standalone, centralized applications or integrated applications that create a distributed system. The latter have a particular architecture with autonomous agents that are able to take preemptive and reactive measures and even to move over the network. The AAFID architecture of these systems has been presented in Part I of the cycle.
One may also categorize intrusion detection systems in terms of behavior: they may be passive (simply generating alerts and logging network packets) or active, meaning that they detect and respond to attacks, attempt to patch software holes before they are exploited, or act proactively by logging out potential intruders or blocking services. This is discussed in Part III of the cycle.



Figure: Classification of intrusion detection systems


Audit trail processing vs. on-the-fly processing
Intrusion detection systems can run on either a continuous or periodic feed of information (Real-time IDS and Interval-based IDS respectively) and hence they use two different intrusion detection approaches. Audit trail analysis is the prevalent method used by periodically operated systems. In contrast, the IDS deployable in real-time environments are designed for online monitoring and analyzing system events and user actions.
Audit Trail Processing
There are many issues related to audit trail (event log) processing. Storing audit trail reports in a single file must be avoided since intruders may use this feature to make unwanted changes. It is far better to keep a certain number of event log copies spread over the network, though it would imply adding some overheads to both the system and network.
Further, from the functionality point of view, recording every event possible means a noticeable consumption of system resources (both the local system and network involved). Log compression, instead, would increase the system load. Specifying which events are to be audited is difficult because certain types of attacks may pass undetected. It is also difficult to predict how large audit files can be – through experience one can only make a rough estimate. Also, an appropriate setting of a storage period for current audit files is not a straightforward task. In general, this depends on a specific IDS solution and its correlation engine. Certainly, archive files should be stored as copies for retrieval analysis purposes.
Log processing systems are vulnerable to Denial of Service (DoS) attacks that render audit mechanisms unreliable and unusable by overflowing the system’s free space.
The main reasons for having an audit function include:
• detection of attack manifestations for post-mortem analysis;
• detection of recurring intrusion activity (yielding unauthorized privileges, abuse, attack attempts);
• identification of successful intruders;
• identification of own system weaknesses;
• development of access and user signatures and definition of network traffic rules that are important for anomaly detection-based intrusion detection systems;
• repelling potential intruders by simply making them aware of the existence of the auditing means;
• providing a form of defense for an innocent user, for example one wrongly implicated in hacking attempts.
A log event-based IDS method needs to have the following capabilities:
• allowing parameterization for easy recording of system event logs and user activities;
• providing an option of self-disengagement of logging mechanisms in the event of insufficient space or DoS attacks;
• audit trail processing using additional mechanisms (aggregation, artificial intelligence, data mining) because of large file sizes;
• a reasonably low system resource consumption for auditing purposes.
Examples of intrusion detection systems that use audit trail processing are:
• SecureView that checks logs produced by CheckPoint Firewall-1
• CMDS (Computer Misuse Detection System). With its built-in expert system, it analyzes all event logs to recognize abnormal user behavior.
• ACID (Analysis Console for Intrusion Databases) is a PHP-based analysis engine used to search and process a database of incidents generated by various security tools such as IDSs, firewalls and network traffic analyzers. ACID has a user query builder, which can analyze packets down to their payload in order to find alerts matching certain criteria across databases. It can also manage alerts and generate a variety of statistics.
On-the-fly processing
With on-the-fly processing, an IDS performs online verification of system events. Generally, a stream of network packets is constantly monitored. With this type of processing, intrusion detection uses knowledge of current activity on the network to sense possible attack attempts (it does not look for successful attacks in the past).
Given the computational complexity involved, the algorithms used here are limited to quick, efficient and often algorithmically simple procedures. This is due to a compromise between the main requisite, attack detection capability, and the complexity of the data processing mechanisms used in the detection itself.
At the same time, building an on-the-fly processing IDS tool requires a large amount of RAM (buffers), since no long-term data storage is used. Such an IDS may therefore sometimes miss packets, because processing every packet under heavy traffic is not realistic.
The amount of data collected by the detector is small since it views only buffer contents. Hence, only small portions of information can be analyzed for searching certain values or sequences.
The main method used in real-time detection is simply looking for character strings in network and transport layer packets (layers 3 and 4), particularly in their headers. This may be done by monitoring the IP addresses that initiate connections, or by checking for inappropriate TCP/IP flag combinations (to capture packets that do not match known standards) [Fre01]. An example of packet pathology is when both the source and destination port addresses are set to 21. This is not compliant with the FTP specification, since the source port number must be greater than 1024. Other examples are a zero-value service type, a packet with the SYN and FIN flags both set, a mismatch in sequence or acknowledgement numbering, an ACK value set to a non-zero number with the ACK flag not set, etc.
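The header pathologies just listed can be sketched as simple rules over a parsed header. The dict fields below are stand-ins for real packet parsing, which a NIDS would do on raw captured frames:

```python
def header_anomalies(pkt):
    """Return the list of rule violations found in a parsed TCP header."""
    problems = []
    flags = set(pkt.get("flags", ()))
    if {"SYN", "FIN"} <= flags:
        # These flags request opposite actions; never valid together.
        problems.append("SYN and FIN both set")
    if "ACK" not in flags and pkt.get("ack", 0) != 0:
        problems.append("non-zero ACK value with ACK flag not set")
    if pkt.get("src_port") == 21 and pkt.get("dst_port") == 21:
        # FTP clients must use an ephemeral source port above 1024.
        problems.append("source and destination port both 21")
    return problems

print(header_anomalies({"flags": ("SYN", "FIN"),
                        "src_port": 21, "dst_port": 21}))
```

A production engine would express these rules in a signature language rather than hard-coding them, but the matching logic is the same.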
In contrast with standard inspection methods, only selected packets in a data stream are inspected, and the inspection process looks only for "state" information, such as whether a packet contains malicious code.
A somewhat different method is applied in application layer analysis (FTP, POP3, HTTP, etc.). Application-based IDS employ so-called standard packet inspection to analyze the TCP packet payload (headers are excluded). With this method, only selected, correlated packets in a data stream are examined, and the inspection process looks for information about whether a packet matches typical packets (commands) of a given protocol. For example, a POP3 denial of service vulnerability is exploited by saturating the POP3 server with multiple requests to execute a command. Here, the attack signature is defined by the number of commands sent by a given system and by establishing an alarm threshold. The method assumes that anomalies found in packet inspection, checks of packet size and threshold values are manifestations of a denial of service attack, including at the transport layer, for example the Ping of Death attack. Other examples of standard packet inspection include detecting email viruses before they reach mailboxes by looking for matching email titles or attachment names; searching for malicious code such as buffer overflow exploits; and using signatures that monitor user session status to disallow, for example, listing the directory structure on an FTP server before a successful user login [Dor02a]. A drawback of the high-layer analysis approach is that it is time-consuming and dependent on the operating environment (application layer protocols vary from operating system to operating system).
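The POP3 threshold signature described above amounts to counting protocol commands per source within an observation interval and alarming once a source exceeds the threshold. A minimal sketch, with an assumed, illustrative threshold value:

```python
from collections import Counter

ALARM_THRESHOLD = 100   # commands per source per interval (assumed value)

def flood_sources(events, threshold=ALARM_THRESHOLD):
    """events: iterable of (source_ip, command) pairs seen in one interval.
    Return the set of sources that exceeded the command threshold."""
    counts = Counter(src for src, _cmd in events)
    return {src for src, n in counts.items() if n > threshold}
```

The threshold is the tuning knob of such a signature: too low and busy legitimate clients trigger alarms, too high and a slow flood slips through.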
The real-time based IDSs offer the following advantages:
• they excel at detecting attacks in progress and even responding to (blocking) them;
• the ability to cover network-inherent security holes associated with vulnerability to many types of attacks, particularly DoS, which cannot be detected using a common audit trail analysis approach—network traffic analysis is needed here;
• the system resources are less consumed than in the case of audit trail processing.
Disadvantages include:
• Source identification is based on the network address derived from the packet (not, for example, on a network ID[1]). The source address may be spoofed, making attacks harder to trace and respond to automatically.
• They cannot handle encrypted packets, and thereby miss information essential for intrusion detection.
• Since the analytical module uses a limited portion of source information (buffer content only), its detection capability is limited.
• Continuous scanning of network traffic reduces the throughput of the segment on which the IDS sits. This is of particular importance when an IDS tool is deployed near the firewall.

Anomaly vs. signature detection
Intrusion detection systems must be capable of distinguishing between normal (not security-critical) and abnormal user activities, to discover malicious attempts in time. However, translating user behavior (or a complete user-system session) into a consistent security-related decision is often not that simple: many behavior patterns are unpredictable and unclear (Fig. 2). In order to classify actions, intrusion detection systems take advantage of either the anomaly detection approach, sometimes referred to as behavior based [Deb99], or attack signatures, i.e. descriptive material on known abnormal behavior (signature detection, also called knowledge based) [Axe00, Jon00, Kum95].


Fig. 2. Behavior of the user in the system [Jon00]

Normal behavior patterns — anomaly detection
Normal behavior patterns are useful in predicting both user and system behavior. Here, anomaly detectors construct profiles that represent normal usage and then compare current behavior data against these profiles to recognize possible attack attempts.
To match event profiles, the system must first be given initial user profiles to train it on legitimate user behaviors. Profiling carries a risk: when the system is allowed to “learn” on its own, experienced intruders (or users) can train it to the point where previously intrusive behavior is considered normal. An inappropriate profile will not be able to detect all possible intrusive activities. Furthermore, profiles must be updated and the system retrained, which is a difficult and time-consuming task.
Given a set of normal behavior profiles, everything that does not match the stored profile is considered to be a suspicious action. Hence, these systems are characterized by very high detection efficiency (they are able to recognize many attacks that are new to the system), but their tendency to generate false alarms is generally a problem.
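A minimal sketch of profile-based mismatch detection, using a hypothetical per-user metric (daily file-access counts) and a simple mean/standard-deviation profile; the 3-sigma threshold and the training data are assumptions for illustration:

```python
import statistics

def build_profile(training_values):
    """Build a normal-usage profile as (mean, standard deviation)."""
    return statistics.mean(training_values), statistics.stdev(training_values)

def is_anomalous(value, profile, k=3.0):
    """Flag values more than k standard deviations from the profile mean."""
    mean, std = profile
    return abs(value - mean) > k * std

# Profile a user's daily file-access counts, then test new observations.
profile = build_profile([40, 42, 38, 45, 41, 39, 43])
normal = is_anomalous(44, profile)    # close to the mean
suspect = is_anomalous(400, profile)  # far outside the profile
```

Everything outside the profile is flagged, which is exactly why such detectors catch novel attacks but also generate false alarms whenever legitimate behavior drifts.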
The advantages of the anomaly detection method are: the possibility of detecting novel attacks as intrusions; anomalies are recognized without knowledge of their causes and characteristics; less dependence on the operating environment (compared with attack signature-based systems); and the ability to detect abuse of user privileges.
The biggest disadvantages of this method are:
• A substantial false alarm rate. System usage is not monitored during the profile construction and training phases; hence, any user activity not seen during these phases will later be flagged as illegitimate.
• User behaviors can vary with time, thereby requiring a constant update of the normal behavior profile database (this may imply the need to close the system from time to time and may also be associated with greater false alarm rates).
• The need to retrain the system for changing behavior makes it blind to anomalies present during the training phase (false negatives).
Misbehavior signatures — signature detection
Systems possessing information on abnormal, unsafe behavior (attack signature-based systems) are often used in real-time intrusion detection systems (because of their low computational complexity).
The misbehavior signatures fall into two categories:
• Attack signatures – they describe action patterns that may pose a security threat. Typically, they are presented as a time-dependent relationship between series of activities that may be interlaced with neutral ones.
• Selected text strings – signatures that match text strings indicating a suspicious action (for example, a reference to /etc/passwd).
Any action that is not explicitly prohibited is allowed. Hence, the accuracy of these systems is very high (a low number of false alarms), but they typically do not achieve completeness and are not immune to novel attacks.
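A minimal sketch of the text-string variant, with an assumed (hypothetical) signature list:

```python
# Hypothetical signature list: text strings whose presence in a payload
# is treated as a suspicious action.
SIGNATURES = [b"/etc/passwd", b"/etc/shadow", b"cmd.exe"]

def match_signatures(payload, signatures=SIGNATURES):
    """Return every signature found in the payload (empty list = no alarm)."""
    return [sig for sig in signatures if sig in payload]

hits = match_signatures(b"GET /../../etc/passwd HTTP/1.0")
clean = match_signatures(b"GET /index.html HTTP/1.0")
```

The sketch also shows the method's limit: only strings already in the list can ever be matched, so unknown attacks pass silently.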
There are two main approaches associated with signature detection (already mentioned in the section describing real-time detectors):
• Verification of the pathology of lower layer packets— many types of attacks (Ping of Death or TCP Stealth Scanning) exploit flaws in IP, TCP, UDP or ICMP packets. With a very simple verification of flags set on specific packets it is possible to determine whether a packet is legitimate or not. Difficulties may be encountered with possible packet fragmentation and the need for re-assembly. Similarly, some problems may be associated with the TCP/IP layer of the system being protected. It is well known that hackers use packet fragmentation to bypass many IDS tools [Dor02a].
• Verification of application layer protocols — many types of attacks (WinNuke) exploit programming flaws, for example, out-of-band data sent to an established network connection. In order to effectively detect such attacks, the IDS must have implemented many application layer protocols.
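The flag-verification idea from the first point can be sketched as a simple classifier over the TCP flags byte. The scan names follow common usage; the rules shown are a simplified subset of what a real IDS checks:

```python
# TCP flag bits as found at offset 13 of the TCP header.
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def classify_flags(flags):
    """Classify pathological TCP flag combinations used by stealth scans."""
    if flags == 0:
        return "NULL scan"              # no flags set at all
    if (flags & (FIN | PSH | URG)) == (FIN | PSH | URG) and not flags & ACK:
        return "Xmas scan"              # FIN+PSH+URG without ACK
    if flags == FIN:
        return "FIN scan"               # a lone FIN outside any session
    if flags & SYN and flags & FIN:
        return "SYN+FIN anomaly"        # mutually exclusive flags together
    return "ok"

null_scan = classify_flags(0)
xmas_scan = classify_flags(FIN | PSH | URG)
legit_ack = classify_flags(SYN | ACK)
```

Note that this per-packet view is exactly what fragmentation attacks target: if the flags byte arrives split across fragments, a naive checker never sees the full combination, which is why reassembly is needed.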
The signature detection methods have the following advantages: very low false alarm rate, simple algorithms, easy creation of attack signature databases, easy implementation and typically minimal system resource usage.
Some disadvantages:
• Difficulty in keeping information on new types of attacks up to date (maintaining the attack signature database as appropriate).
• They are inherently unable to detect unknown, novel attacks. A continuous update of the attack signature database for correlation is a must.
• Maintenance of an IDS is necessarily connected with analyzing and patching of security holes, which is a time-consuming process.
• The attack knowledge is operating environment–dependent, so misbehavior signature-based intrusion detection systems must be configured in strict compliance with the operating system (version, platform, applications used etc.)
• They have difficulty handling internal attacks. Typically, abuse of legitimate user privileges is not sensed by the system as malicious activity (because of the lack of information on user privileges and on the attack signature structure).
Commercially offered IDS products often use the signature detection method, for two reasons. First, it is easier to associate a given signature with a known attack abstraction and to assign it a name used worldwide, e.g. Ping of Death (a suitable signature can be shipped with the installation version), than to characterize the normal behavior of a certain John Brown in an organization. Second, the attack signature database must be updated regularly (by adding signatures of newly discovered attacks and exploits), which can be a fairly good source of income for vendors of IDS tools. A database update is also a less cumbersome task than tracking changes in typical user behavior profiles; the latter may require temporarily shutting down the system, which cannot be tolerated in certain applications.
The example below presents an attack signature taken from the Snort program that detects ping ICMP packets larger than 800 bytes, incoming from an external network and associated with any port:
alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"MISC large ICMP"; dsize: >800; reference:arachnids,246; classtype:bad-unknown; sid:499;)
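For comparison, a rough Python equivalent of the same check might look as follows. The HOME_NET range here is an assumption for this sketch; a real Snort deployment defines $HOME_NET and $EXTERNAL_NET in its configuration:

```python
import ipaddress

HOME_NET = ipaddress.ip_network("192.168.0.0/16")  # assumed $HOME_NET

def large_icmp_alert(src, dst, proto, payload_len):
    """Mimic the rule: alert on ICMP payloads larger than 800 bytes
    coming from outside HOME_NET toward a host inside it."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    external = src_ip not in HOME_NET
    inbound = dst_ip in HOME_NET
    return proto == "icmp" and external and inbound and payload_len > 800

alert = large_icmp_alert("203.0.113.7", "192.168.1.20", "icmp", 1200)
quiet = large_icmp_alert("192.168.1.5", "192.168.1.20", "icmp", 1200)
```

The `dsize: >800` option of the rule corresponds to the `payload_len > 800` test, and the `->` direction operator to the external/inbound checks.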
Parameter Pattern Matching
The third method of intrusion detection is subtler than the two mentioned earlier. It relies on the fact that system administrators monitor various system and network attributes (not necessarily targeting security issues). As a rule, the information obtained in this way is constant and specific to a given environment. This method uses the administrators' day-to-day operational experience as the basis for detecting anomalies. It can be considered a special case of the normal-profile methods; the difference is that the profile here is part of human knowledge.
This is a very powerful technique, since it allows detection of intrusions based on unknown types of attack: a system operator can notice subtle changes that automated mechanisms may miss. Its inherent disadvantage is that humans can process, and hence understand, only a limited amount of information at a time, which means that certain attacks may pass undetected.
Data processing techniques used in intrusion detection systems
Depending on the type of approach taken in intrusion detection, various mechanisms (techniques) are employed to process the data that reaches an IDS[2]. Several of them are described briefly below:
• Expert systems. These work on a previously defined set of rules describing an attack. All security-related events incorporated in an audit trail are translated into if-then-else rules. Examples are Wisdom & Sense and ComputerWatch (developed at AT&T).
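A toy sketch of such a rule base, with two hypothetical if-then rules evaluated over audit events (the event fields and rule names are illustrative assumptions, not the format of any real product):

```python
# Each rule is (name, predicate over one audit event): a minimal
# stand-in for the if-then-else rule bases of expert-system IDSs.
RULES = [
    ("repeated auth failure",
     lambda e: e["event"] == "login_failed" and e["count"] >= 5),
    ("sensitive file read",
     lambda e: e["event"] == "file_read" and e["path"] == "/etc/shadow"),
]

def evaluate(audit_trail, rules=RULES):
    """Fire every rule whose condition matches an audit event."""
    return [(name, event) for event in audit_trail
            for name, pred in rules if pred(event)]

trail = [
    {"event": "login_failed", "count": 6, "path": None},
    {"event": "file_read", "count": 1, "path": "/home/a/notes.txt"},
]
fired = evaluate(trail)
```

Real expert systems add forward chaining, so that the conclusion of one rule can satisfy the condition of another; this sketch evaluates each rule independently.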
• Signature analysis. Similarly to the expert system approach, this method is based on attack knowledge. It transforms the semantic description of an attack into the appropriate audit trail format, so that attack signatures can be found in logs or input data streams in a straightforward way. An attack scenario can be described, for example, as a sequence of audit events that a given attack generates, or as patterns of searchable data captured in the audit trail. The method uses abstract equivalents of audit trail data, and detection is accomplished with common text string matching mechanisms. Typically, it is a very powerful technique and as such is very often employed in commercial systems (for example Stalker, Real Secure, NetRanger, Emerald eXpert-BSM).
• Colored Petri Nets. This approach is often used to generalize attacks from expert knowledge bases and to represent attacks graphically. Purdue University's IDIOT system uses Colored Petri Nets. With this technique it is easy for system administrators to add new signatures to the system; however, matching a complex signature against audit trail data may be time-consuming. The technique is not used in commercial systems.
• State-transition analysis. Here, an attack is described as a set of goals and transitions that must be achieved by an intruder to compromise a system. Transitions are represented on state-transition diagrams.
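A minimal sketch of this idea, with a hypothetical three-step attack scenario encoded as a transition table; neutral actions simply leave the state unchanged, matching the interlacing of attack and neutral activities described earlier:

```python
# A hypothetical attack scenario as a state-transition diagram:
# each entry maps (current_state, observed_action) -> next_state,
# and reaching "compromised" means the goal chain completed.
TRANSITIONS = {
    ("start", "copy_shell"): "staged",
    ("staged", "chmod_setuid"): "armed",
    ("armed", "exec_shell"): "compromised",
}

def scan(actions, transitions=TRANSITIONS):
    """Walk observed actions through the diagram; report whether the
    final (compromised) state is ever reached."""
    state = "start"
    for action in actions:
        state = transitions.get((state, action), state)  # ignore neutral actions
        if state == "compromised":
            return True
    return False

attack = scan(["copy_shell", "ls", "chmod_setuid", "exec_shell"])
benign = scan(["ls", "exec_shell"])
```

The table-driven form makes it easy to see why the order of transitions matters: `exec_shell` alone, without the earlier goals, never leaves the start state.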
• Statistical analysis approach. This is a frequently used method (for example in SECURENET). The user or system behavior (a set of attributes) is measured by a number of variables sampled over time; examples of such variables are user login and logout times, the number of files accessed in a period of time, and usage of disk space, memory, CPU, etc. The update frequency can vary from a few minutes to, for example, one month. The system stores mean values for each variable and detects when a measured value exceeds a predefined threshold. This simple approach, however, was unable to capture a typical user behavior model, and approaches that matched individual user profiles against aggregated group variables also proved inefficient. Therefore, a more sophisticated model of user behavior was developed, using short- and long-term user profiles that are regularly updated to keep up with changes in user behavior. Statistical methods are often used in implementations of normal-behavior-profile-based intrusion detection systems.
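The short- and long-term profile idea can be sketched with two exponentially weighted moving averages over a hypothetical metric; the smoothing factors and the CPU figures are arbitrary assumptions:

```python
def update_ewma(profile, value, alpha):
    """Exponentially weighted moving average: higher alpha = shorter memory."""
    return alpha * value + (1 - alpha) * profile

def anomaly_score(short_term, long_term):
    """Relative divergence of the short-term profile from the long-term one."""
    return abs(short_term - long_term) / max(long_term, 1e-9)

# Hypothetical metric: CPU seconds per session. The short-term profile
# (alpha=0.5) reacts quickly; the long-term one (alpha=0.05) drifts slowly.
short = long = 10.0
for cpu in [10, 11, 9, 10, 80, 90, 85]:       # usage suddenly jumps
    short = update_ewma(short, cpu, alpha=0.5)
    long = update_ewma(long, cpu, alpha=0.05)

divergence = anomaly_score(short, long)        # large after the jump
```

Because the long-term profile is updated continuously, legitimate drift in user behavior is eventually absorbed, while a sudden jump shows up as a large short-vs-long divergence.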
• Neural networks. Neural networks use their learning algorithms to learn the relationship between input and output vectors and to generalize it to new input/output relationships. With the neural network approach to intrusion detection, the main purpose is to learn the behavior of actors in the system (e.g., users, daemons). It is known that statistical methods are partially equivalent to neural networks; the advantage of neural networks over statistics is that they provide a simple way to express nonlinear relationships between variables and learn these relationships automatically. Experiments have been carried out on neural network prediction of user behavior. The results showed that the behavior of UNIX super-users (roots) is predictable (because of the very regular functioning of automatic system processes) and that, with few exceptions, the behavior of most other users is also predictable. Neural networks remain a computationally intensive technique and are not widely used in the intrusion detection community.
• User intention identification. This technique (which to our knowledge has only been used in the SECURENET project) models the normal behavior of users by the set of high-level tasks they have to perform on the system (in relation to the users' functions). These tasks are taken as series of actions, which in turn are matched against the appropriate audit data. The analyzer keeps a set of tasks that are acceptable for each user; whenever a mismatch is encountered, an alarm is produced.
• Computer immunology. Analogies with immunology have led to a technique that builds a model of the normal behavior of UNIX network services, rather than of individual users. The model consists of short sequences of system calls made by the processes; attacks that exploit flaws in application code are very likely to take unusual execution paths. First, a set of reference audit data representing the appropriate behavior of services is collected; then the knowledge base is filled with all the known “good” sequences of system calls. These patterns are used for continuous monitoring of system calls, checking whether each generated sequence is listed in the knowledge base; if not, an alarm is generated. This technique has a potentially very low false alarm rate, provided that the knowledge base is fairly complete. Its drawback is the inability to detect errors in the configuration of network services: whenever an attacker uses legitimate actions on the system to gain unauthorized access, no alarm is generated.
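A minimal sketch of the system-call sequence model; the window length and the call traces are illustrative assumptions:

```python
def ngrams(calls, n=3):
    """Sliding windows of n consecutive system calls."""
    return {tuple(calls[i:i + n]) for i in range(len(calls) - n + 1)}

def train(traces, n=3):
    """Collect all 'good' call sequences observed for a network service."""
    known = set()
    for trace in traces:
        known |= ngrams(trace, n)
    return known

def monitor(trace, known, n=3):
    """Alarm on any window absent from the knowledge base."""
    return [g for g in ngrams(trace, n) if g not in known]

normal_runs = [["open", "read", "write", "close"],
               ["open", "read", "read", "write", "close"]]
known = train(normal_runs)
alarms = monitor(["open", "read", "exec", "write"], known)  # unseen windows
```

The model watches the service, not the user, which is exactly why a misconfiguration abused through legitimate calls produces no unseen window and hence no alarm.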
• Machine learning. This is an artificial intelligence technique that stores the user-input stream of commands in vectorial form and uses it as a reference for the normal user behavior profile. Profiles are then grouped into a library of user commands sharing certain common characteristics [Mar01].
• Data mining. This generally refers to a set of techniques for extracting previously unknown but potentially useful information from large stores of data. Data mining methods excel at processing large system logs (audit data), but are less useful for stream analysis of network traffic. One fundamental data mining technique used in intrusion detection is the decision tree [Fan01]: decision tree models allow one to detect anomalies in large databases. Another technique is segmentation, which allows the extraction of patterns of unknown attacks [Lee00b]; this is done by matching patterns extracted from a simple audit set against those of warehoused unknown attacks [Lee00a]. A typical data mining technique is finding association rules, which allows one to extract previously unknown knowledge about new attacks [Bas00] or to build patterns of normal behavior. Anomaly detection often generates false alarms; with data mining it is easy to correlate the data related to alarms with mined audit data, thereby considerably reducing the false alarm rate [Man00].
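The association-rule idea can be sketched as simple frequent-pair counting over hypothetical connection records reduced to attribute sets; a real system would compute support and confidence over far larger itemsets:

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(records, min_support):
    """Find attribute pairs that co-occur in at least min_support records."""
    counts = Counter()
    for record in records:
        for pair in combinations(sorted(record), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Hypothetical connection records reduced to attribute sets.
audit = [
    {"service=http", "flag=S0", "dst=web1"},
    {"service=http", "flag=S0", "dst=web2"},
    {"service=http", "flag=S0", "dst=web1"},
    {"service=smtp", "flag=SF", "dst=mail"},
]
rules = frequent_pairs(audit, min_support=3)
# The S0 (half-open) flag co-occurring with HTTP across many records
# is the kind of previously unknown regularity mining surfaces.
```

Counting pairs is the first pass of Apriori-style algorithms; longer frequent itemsets are grown from the surviving pairs in the same way.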
