Sunday, September 7, 2008

CSC As i See it 009

How to generate data in the tables using the following database
Oracle
Firebird
SQLite
FileMaker
1. Firebird
Start the program, then run the Create Database Wizard.
To run the Create Database Wizard, select the Database | Create New Database... main menu item or click the Create New Database button on the main toolbar.
The first wizard step allows you to set the name of the new database.
The Database Editor allows you to browse all the database objects and their main properties. From there it is possible to create, edit, and drop database sub-items.
2. Oracle
Start Oracle Database Maestro. Select the Create Database Profiles main menu item to create profiles for the existing databases.
After you have created the required database profiles with the Create Database Profile Wizard, they appear in the explorer tree on the left. Now you can establish a connection to the database. If the connection succeeds, the database node expands to display the tree of its objects.
New tables are created with the Create Table Wizard. In order to run the wizard you should either

select the Object | Create Database Object... main menu item, or

select the Table icon in the Create Database Object dialog.
3. FileMaker
Start the software.
You select from a palette of files, fields, and graphic tools to create a structure.
The first step in designing a database is choosing what information you want to store in it.
The second step is to create fields in your file. This requires deciding what types of fields you want. For example, fields can store text, numbers, pictures, or other kinds of data.
The third step is to create layouts. Layouts are computer screens that present information to you. Usually a database provides graphic tools that can enhance the appearance of the layout. For example, you can add boxes, lines, and fill patterns by using the tool palette.
4. SQLite
Start the Database.
Just click the Create New Database button from the main menu. This opens the wizard.
Once you have filled in the required information in the wizard window, your database is created.
To create tables, you must first connect to a database.
Once you have connected, just right-click the database item in the explorer tree, then click "Create New Table".
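The same steps can be done in code rather than through the wizard. Here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration:

```python
import sqlite3

# Create (or open) a database; ":memory:" keeps it in RAM for this demo.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The code equivalent of the "Create New Table" wizard step.
cur.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, course TEXT)")

# Generate some data in the table.
cur.executemany(
    "INSERT INTO students (name, course) VALUES (?, ?)",
    [("Ana", "CSC"), ("Ben", "CSC")],
)
conn.commit()

print(cur.execute("SELECT COUNT(*) FROM students").fetchone()[0])  # 2
```

The same CREATE TABLE and INSERT statements work, with minor dialect differences, in Firebird, Oracle, and FileMaker's SQL interface as well.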

Monday, August 11, 2008

CSC as i see it 008

Edgar F. Codd first proposed the process of normalization and what came to be known as the 1st normal form:

There is, in fact, a very simple elimination[1] procedure which we shall call normalization. Through decomposition non-simple domains are replaced by "domains whose elements are atomic (non-decomposable) values."

Edgar F. Codd, A Relational Model of Data for Large Shared Data Banks[2]

In his paper, Edgar F. Codd used the term "non-simple" domains to describe a heterogeneous data structure, but later researchers would refer to such a structure as an abstract data type.
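Codd's decomposition step can be sketched with a small, invented example: a record whose "courses" field holds a list is a non-simple domain, and normalization replaces it with a second relation whose values are all atomic.

```python
# A non-1NF relation: the "courses" domain is non-simple (a list, not atomic).
students = [
    {"id": 1, "name": "Ana", "courses": ["CSC101", "CSC102"]},
    {"id": 2, "name": "Ben", "courses": ["CSC101"]},
]

# Decomposition: replace the non-simple domain with a separate relation
# whose elements are atomic (non-decomposable) values.
students_1nf = [{"id": s["id"], "name": s["name"]} for s in students]
enrollments = [
    {"student_id": s["id"], "course": c} for s in students for c in s["courses"]
]

print(enrollments)  # three flat rows, each holding only atomic values
```

Each enrollment row now carries a single course value plus the student's key, so the nested list is gone and the two relations can be rejoined when needed.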

Edgar Frank "Ted" Codd (August 23, 1923 – April 18, 2003) was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He made other valuable contributions to computer science, but the relational model, a very influential general theory of data management, remains his most memorable achievement.

Edgar Frank Codd was born on the Isle of Portland, in England. After attending Poole Grammar School, he studied mathematics and chemistry at Exeter College, Oxford, before serving as a pilot in the Royal Air Force during the Second World War. In 1948, he moved to New York to work for IBM as a mathematical programmer. In 1953, angered by Senator Joseph McCarthy, Codd moved to Ottawa, Canada. A decade later he returned to the U.S. and received his doctorate in computer science from the University of Michigan in Ann Arbor. Two years later he moved to San Jose, California to work at IBM's Almaden Research Center, where he continued to work until the 1980s. During the 1990s, his health deteriorated and he ceased work. [1]

Codd received the Turing Award in 1981 and in 1994 he was inducted as a Fellow of the Association for Computing Machinery[2].

Codd died of heart failure at his home in Williams Island, Florida at the age of 79 on Friday, April 18, 2003.[3]

In the 1960s and 1970s he worked out his theories of data arrangement, issuing his paper "A Relational Model of Data for Large Shared Data Banks" in 1970, after an internal IBM paper one year earlier.[4] To his disappointment, IBM proved slow to exploit his suggestions until commercial rivals started implementing them.

Initially, IBM refused to implement the relational model in order to preserve revenue from IMS/DB. Codd then showed IBM customers the potential of the implementation of its model, and they in turn pressured IBM. IBM then included a System R subproject in its Future Systems project, but put in charge of it developers who were not thoroughly familiar with Codd's ideas, and isolated the team from Codd. As a result, they did not use Codd's own Alpha language but created a non-relational one, SEQUEL. Even so, SEQUEL was so superior to pre-relational systems that it was copied, based on pre-launch papers presented at conferences, by Larry Ellison in his Oracle Database, which actually reached market before SQL/DS. Due to the already-proprietary status of the original moniker, SEQUEL was renamed SQL.

Codd continued to develop and extend his relational model, sometimes in collaboration with Chris Date. One of the normalized forms, the Boyce-Codd Normal Form, is named after him.

As the relational model started to become fashionable in the early 1980s, Codd fought a sometimes bitter campaign to prevent the term being misused by database vendors who had merely added a relational veneer to older technology. As part of this campaign, he published his 12 rules to define what constituted a relational database. His campaign extended to the SQL language, which he regarded as an incorrect implementation of the theory. This made his position in IBM increasingly difficult, so he left to form his own consulting company with Chris Date and others.

Edgar Codd coined the term OLAP and wrote the twelve laws of online analytical processing, although these were never truly accepted after it came out that his white paper on the subject was paid for by a software vendor. His last work, a book named The Relational Model for Database Management, Version 2, was not so well received. On the other hand, his extension of the ideas in the relational model to cover database design issues, in his RM/T, has proved important. Codd also contributed knowledge in the area of cellular automata.

In 2004, SIGMOD renamed its highest prize, SIGMOD Innovations Award, in his honor.



Thursday, July 17, 2008

csc as i see it 007

File
A computer file is a block of arbitrary information, or resource for storing information, which is available to a computer program and is usually based on some kind of durable storage. A file is durable in the sense that it remains available for programs to use after the current program has finished. Computer files can be considered as the modern counterpart of paper documents which traditionally were kept in offices' and libraries' files, which are the source of the term.

DataField

A data field is a unit of data within a record; in some record formats it is stored as a string containing all field and subfield data in the record.

Record
In computer science, a record type is a type whose values are records, i.e. aggregates of several items of possibly different types. The items being aggregated are called fields (or members) and are usually identified or indexed by field labels, names identifying the fields.
Record types are mathematically equivalent to product types, but they usually behave differently with respect to typing: in most if not all typed programming languages, record types are bound to names, and two record types are equal for the type system if and only if they have the same name. This contrasts with product types, where equality is the equality of each of the type components individually.
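The distinction can be sketched in Python (class names invented for illustration): two record types with identical fields are still distinct types, while plain tuples, which behave like product types, compare component by component.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

@dataclass
class Size:
    x: int
    y: int

# Nominal typing: same fields, but the record types themselves are distinct...
print(Point(1, 2) == Size(1, 2))   # False: different record types
# ...whereas tuples (product types) compare component by component.
print((1, 2) == (1, 2))            # True
```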

Folder
A file folder (US usage) or folder (British and Australian usage) is a kind of folder that holds loose papers together for organization and protection. File folders usually consist of a sheet of heavy paper stock or other thin, but stiff, material which is folded in half, and are used to keep paper documents. They are often used in conjunction with a filing cabinet for storage. File folders can easily be purchased at office supply stores. In the UK, one of the oldest and best known filing companies is Railex. File folders are usually labeled based on what's inside them. Folders can be labeled directly on the tab with a pen or pencil. Others write on adhesive labels that are placed on the tabs. There are also electronic labelmakers that can be used to make the labels.

Database
A database is a structured collection of records or data. A computer database relies upon software to organize the storage of data. The software models the database structure in what are known as database models. The model in most common use today is the relational model. Other models such as the hierarchical model and the network model use a more explicit representation of relationships (see below for explanation of the various database models).

Wednesday, July 16, 2008

CSC As I See It 006

Flat model
consists of a single, two-dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another. For instance, columns for name and password might be used as part of a system security database; each row would hold the password associated with an individual user. Columns of the table often have a type associated with them, defining them as character data, date or time information, integers, or floating-point numbers.
Hierarchical model
is organized into a tree-like structure, implying a single upward link in each record to describe the nesting, and a sort field to keep the records in a particular order in each same-level list. Hierarchical structures were widely used in the early mainframe database management systems, such as the Information Management System (IMS) by IBM, and now describe the structure of XML documents. This structure is very efficient for describing many relationships in the real world: recipes, tables of contents, ordering of paragraphs/verses, any nested and sorted information. The hierarchical structure is inefficient for certain database operations when a full path (as opposed to an upward link and sort field) is not also included for each record.
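A minimal sketch of the hierarchical model (record names invented): each record carries a single upward link and a sort field, which makes walking up to the root cheap, while finding children requires scanning every record.

```python
# Each record has one upward link (parent) and a sort field.
records = {
    1: {"title": "Book",      "parent": None, "sort": 0},
    2: {"title": "Chapter 2", "parent": 1,    "sort": 2},
    3: {"title": "Chapter 1", "parent": 1,    "sort": 1},
    4: {"title": "Section",   "parent": 3,    "sort": 1},
}

def full_path(rec_id):
    """Follow the upward links to the root -- the cheap operation."""
    parts = []
    while rec_id is not None:
        parts.append(records[rec_id]["title"])
        rec_id = records[rec_id]["parent"]
    return " / ".join(reversed(parts))

def children(rec_id):
    """Finding children means scanning every record -- the costly direction."""
    kids = [i for i, r in records.items() if r["parent"] == rec_id]
    return sorted(kids, key=lambda i: records[i]["sort"])

print(full_path(4))   # Book / Chapter 1 / Section
print(children(1))    # [3, 2]  (ordered by the sort field)
```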
Network model
organizes data using two fundamental constructs, called records and sets. Records contain fields (which may be organized hierarchically, as in the programming language COBOL). Sets (not to be confused with mathematical sets) define one-to-many relationships between records: one owner, many members. A record may be an owner in any number of sets, and a member in any number of sets. The network model is able to represent redundancy in data more efficiently than is the hierarchical model. The operations of the network model are navigational in style: a program maintains a current position, and navigates from one record to another by following the relationships in which the record participates.
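The owner/member sets can be sketched as follows (record and set names invented): a record participates in several sets without being duplicated, and a program navigates from owner to members through a named set.

```python
# Records, keyed by name.
db_records = {
    "dept_sales": {"type": "department"},
    "emp_ana":    {"type": "employee"},
    "emp_ben":    {"type": "employee"},
    "proj_x":     {"type": "project"},
}

# Sets: one owner, many members; a record may appear in many sets.
db_sets = {
    "works_in":   {"dept_sales": ["emp_ana", "emp_ben"]},
    "staffed_by": {"proj_x": ["emp_ana"]},
}

def members(set_name, owner):
    """Navigate from an owner record to its members through a named set."""
    return db_sets[set_name].get(owner, [])

print(members("works_in", "dept_sales"))  # ['emp_ana', 'emp_ben']
# emp_ana is a member of two different sets, yet its record exists only once.
```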
Relational model
is a way to make database management systems more independent of any particular application. It is a mathematical model defined in terms of predicate logic and set theory. The basic data structure of the relational model is the table, where information about a particular entity (say, an employee) is represented in columns and rows (also called tuples). Thus, the "relation" in "relational database" refers to the various tables in the database; a relation is a set of tuples. The columns enumerate the various attributes of the entity (the employee's name, address or phone number, for example), and a row is an actual instance of the entity (a specific employee) that is represented by the relation. One of the strengths of the relational model is that, in principle, any value occurring in two different records (belonging to the same table or to different tables) implies a relationship between those two records.
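This last point, relationships implied by matching values rather than pointers, can be demonstrated with a small sqlite3 sketch (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER)")
cur.execute("CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO departments VALUES (1, 'Sales')")
cur.execute("INSERT INTO employees VALUES (10, 'Ana', 1)")

# The relationship comes from the shared value (dept_id = departments.id),
# not from any stored link between the rows.
row = cur.execute(
    "SELECT e.name, d.name FROM employees e "
    "JOIN departments d ON e.dept_id = d.id"
).fetchone()
print(row)  # ('Ana', 'Sales')
```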
Dimensional model
is a specialized adaptation of the relational model used to represent data in data warehouses in a way that data can be easily summarized using OLAP queries. In the dimensional model, a database consists of a single large table of facts that are described using dimensions and measures. A dimension provides the context of a fact (such as who participated, when and where it happened, and its type) and is used in queries to group related facts together. A measure is a quantity describing the fact, such as revenue. The dimensional model is often implemented on top of the relational model using a star schema, consisting of one table containing the facts and surrounding tables containing the dimensions. Particularly complicated dimensions might be represented using multiple tables, resulting in a snowflake schema.
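A tiny star schema can be sketched with sqlite3 (table names invented): one fact table holding a measure, one dimension table providing context, and an OLAP-style query that groups facts by the dimension.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dim_date (id INTEGER PRIMARY KEY, year INTEGER)")
cur.execute("CREATE TABLE fact_sales (date_id INTEGER, revenue REAL)")
cur.executemany("INSERT INTO dim_date VALUES (?, ?)", [(1, 2007), (2, 2008)])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 100.0), (1, 50.0), (2, 75.0)])

# Summarize the measure (revenue) grouped by the dimension (year).
rows = cur.execute(
    "SELECT d.year, SUM(f.revenue) FROM fact_sales f "
    "JOIN dim_date d ON f.date_id = d.id "
    "GROUP BY d.year ORDER BY d.year"
).fetchall()
print(rows)  # [(2007, 150.0), (2008, 75.0)]
```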
Object database models
These databases attempt to bring the database world and the application programming world closer together, in particular by ensuring that the database uses the same type system as the application program. This aims to avoid the overhead (sometimes referred to as the impedance mismatch) of converting information between its representation in the database (for example as rows in tables) and its representation in the application program (typically as objects). At the same time, object databases attempt to introduce the key ideas of object programming, such as encapsulation and polymorphism, into the world of databases. A variety of ways have been tried for storing objects in a database. Some products have approached the problem from the application programming end, by making the objects manipulated by the program persistent. This also typically requires the addition of some kind of query language, since conventional programming languages do not have the ability to find objects based on their information content. Others have attacked the problem from the database end, by defining an object-oriented data model for the database, and defining a database programming language that allows full programming capabilities as well as traditional query facilities.
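The "make program objects persistent" approach can be sketched with Python's stdlib shelve module, a pickle-backed store rather than a full object database; class and key names are invented for illustration.

```python
import os
import shelve
import tempfile

class Employee:
    def __init__(self, name, salary):
        self.name = name
        self.salary = salary

path = os.path.join(tempfile.mkdtemp(), "objects")

# Store the object directly: no conversion to rows and back.
with shelve.open(path) as db:
    db["emp1"] = Employee("Ana", 50000)

with shelve.open(path) as db:
    print(db["emp1"].name)  # Ana
    # But finding objects *by content* still means scanning every key,
    # which is why object databases add a query language on top.
    rich = [k for k in db if db[k].salary > 40000]
    print(rich)  # ['emp1']
```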

Thursday, July 3, 2008

CSC as i see it 005

10 DBMS software

Additional types of software applications have been used in the past and may still be in use on older, legacy systems at various organizations around the world. However, these examples provide an overview of the DBMS products most popular and most widely used by IT departments. Some typical examples of DBMS include: Oracle, DB2, Microsoft Access, Microsoft SQL Server, PostgreSQL, MySQL, and FileMaker.

1. Oracle Corporation was started in California in 1977 by three gentlemen, Larry Ellison, Bob Miner and Ed Oates. The initial name of the company was Software Development Labs.

Oracle first focused on developing and selling its database software; since 1987, however, Oracle has expanded into project management software, CRM software, ERP software, business intelligence software and other software applications.

Oracle Corporation is still headquartered in Redwood City, California; Larry Ellison is currently the CEO.

2. IBM first announced their DB2 relational database management system in June, 1983. However, it did not become generally available until almost two years later, in April, 1985. DB2 was known as Database 2 when initially released. Since its introduction it has been called simply DB2, DB2 for MVS/ESA, DB2 for OS/390, DB2 UDB for OS/390, DB2 UDB for z/OS, and currently DB2 9 for z/OS.

DB2 is regarded as a heavy-duty, big-iron, high-resilience program. In 1987 a version was made available for IBM’s OS/2 operating system. This was followed by an AIX edition in 1993, HP-UX and Solaris versions in 1994, Windows in 1995, and finally Linux in 1999. Today DB2 is available on everything from IBM’s largest mainframes to some of the smallest handheld devices.

3. Microsoft Access is a database management system produced by Microsoft Corporation. Microsoft Access is part of the Microsoft Office suite. The latest version is Microsoft Access 2007, which is shipped with Microsoft Office Professional or can be obtained as a single application.

Microsoft Access allows you to create JetSQL databases as well as embed forms and reports into them. In addition you can link to tables from other ODBC compliant data sources.

History

Microsoft Access appeared in the early 1990s as Microsoft's first Windows database, linked to the other Microsoft Office products, which then formed part of Office 4.2. Prior to Access, there had been a number of Windows databases, Superbase being one of the main ones. Version 1.1 of MS Access was rudimentary but was a strong competitor to Superbase and DB2, which were among the only other robust databases around. The difference with MS Access was that it formed part of the MS Office suite and was therefore able to be used directly with the other components, even though linking and embedding was fairly rudimentary then.

Microsoft released a version of MS Access to coincide with each release of MS Windows, more or less. The major releases were MS Access 2.0, MS Access 97, MS Access 2000, MS Access 2003 and, just recently MS Access 2007, to coincide with the new MS Windows Vista.

4. SQL Server is a DBMS released by Microsoft in 1989. It utilizes a dialect of SQL called T-SQL. It has become a popular DBMS and competes against Oracle and IBM DB2 in the database market. It was originally developed jointly with Sybase.

The latest version of SQL Server is "SQL Server 2005", which is offered in 5 major editions (Express, Developer, Workgroup, Standard, and Enterprise). The Express Edition is a scaled-down version that is offered for free by Microsoft. While SQL Server 2005 Developer Edition offers full functionality at a reasonable price, it is not allowed to be used in a production environment. For a production environment it is best to use either SQL Server 2005 Standard, SQL Server 2005 Workgroup or SQL Server 2005 Enterprise.

Early Years

In 1988, Microsoft announced its first version of SQL Server. It was designed for the OS/2 platform and was developed jointly by Microsoft and Sybase along with Ashton-Tate. The first version, named SQL Server 1.0 for OS/2, saw its final release around 1989. This original version was essentially the same as Sybase SQL Server 3.0 on UNIX, VMS, and other platforms.

During the early 1990s, Microsoft began to develop a new version of SQL Server for the NT platform. While it was under development, Microsoft decided that SQL Server should be tightly coupled with the NT operating system. In 1992, Microsoft assumed core responsibility for the future of SQL Server for NT. In 1993, Windows NT 3.1 and SQL Server 4.2 for NT were released.

Microsoft's philosophy of combining a high-performance database with an easy-to-use interface proved to be very successful. Microsoft quickly became the second most popular vendor of high-end relational database software.

Sybase/Microsoft Split

In 1994, Microsoft and Sybase formally ended their partnership. Sybase and Microsoft then parted ways and pursued their own design and marketing schemes. Microsoft kept exclusive rights to SQL Server and Sybase changed the name of its product to Adaptive Server Enterprise. In 1994 Microsoft's SQL Server dropped its Sybase copyright notices.

In 1995, Microsoft released version 6.0 of SQL Server. This release was a major rewrite of SQL Server's core technology. Version 6.0 substantially improved performance, provided built-in replication, and delivered centralized administration. In 1996, Microsoft released version 6.5 of SQL Server. This version brought significant enhancements to the existing technology and provided several new features.

Full DBMS

In 1997, Microsoft released version 6.5 Enterprise Edition. In 1998, Microsoft released version 7.0 of SQL Server, which was a complete rewrite of the database engine and also the first true GUI-based database server.

In 2000, Microsoft released SQL Server 2000. SQL Server version 2000 is Microsoft's most significant release of SQL Server to date. This version further builds upon the SQL Server 7.0 framework. The current version, Microsoft SQL Server 2005, was released in November of 2005. The launch took place alongside Visual Studio 2005 and BizTalk Server 2006.

5. PostgreSQL is an open source object-relational database management system (RDBMS). It was, and continues to be, developed by a community of developers and is not owned by any one company. PostgreSQL, or Postgres, is a powerful BSD-licensed database.


PostgreSQL originated as a database research project at The University of California, Berkeley. After the research project completed in 1994, the code was cleaned up and released as Postgres95. A community began to build around the code, a global development team was formed and the PostgreSQL Project was born in 1996. The Project celebrated its 10th Anniversary in 2006.

PostgreSQL has matured into an enterprise-grade RDBMS with features that compete with commercial offerings. Some notable features are:

  • ANSI SQL 89, 92 and 99 syntax
  • 100% ACID compliant transactions
  • Support for joins, views, aggregations, and referential integrity
  • support for stored procedures and triggers written in C, SQL, PL/pgSQL, TCL, Perl, Python and Ruby
  • automatic locale support
  • choice of clients: psql at the command line, pgadmin3 from a GUI, and phpPgAdmin from a browser

Much supporting software has been written and can be found at PgFoundry.

The PostgreSQL database runs on most Unix/Linux/BSD variants as well as on the Microsoft platform.

6. MySQL is an open source database system which is distributed under the GPL, LGPL and commercial licenses. It is part of the LAMP stack of technologies, as well as the lesser-known WAMP stack [ Windows, Apache, MySQL, PHP ], and is the database server of choice for many open source web applications. MySQL is a relational database management system (RDBMS) that uses Structured Query Language (SQL) to store data in tables. It is a fast, robust, popular and easily customizable open source database. A database is just what the name implies: a base for data. MySQL uses a relational database scheme to provide a stable and fast data access and query model. MySQL is well suited to, but not limited to, web use with a language such as PHP. SQL is short for "Structured Query Language", commonly used to communicate with data in databases as defined by the ANSI/ISO SQL Standard.

More recent versions of MySQL have included database replication capabilities, making MySQL a viable option for enterprise use, directly comparable to Oracle, PostgreSQL, Sybase, and MSSQL.

The MySQL GUI Tools are Graphical User Interfaces suited to MySQL database administration.

7. FileMaker Pro is a cross-platform database application from FileMaker Inc. (a subsidiary of Apple Inc.), known for its combination of power and ease of use. It is also noted for the integration of the database engine with its GUI-based interface, which allows users to modify the database by dragging new elements into the layouts/screens/forms that provide the user interface. This results in a "quasi-object" development environment of a kind that is still largely unique in the "industrial strength" database world.

FileMaker was one of a handful of database applications released for the Apple Macintosh in the 1980s.

FileMaker has compatible versions for both the Mac OS X and Microsoft Windows operating systems and can be networked simultaneously to a mixed Windows and Mac OS X user base. FileMaker is also scalable, being offered in desktop, server, web-delivery and mobile configurations.

8. Firebird (sometimes erroneously called FirebirdSQL) is a relational database management system offering many ANSI SQL:2003 features. It runs on Linux, Windows, and a variety of Unix platforms. Started as a fork of Borland's open source release of InterBase, the Firebird codebase is maintained by the Firebird Project at SourceForge.

New code modules added to Firebird are licensed under the Initial Developer's Public License (IDPL). The original code released by Inprise (as Borland was then called) is licensed under the InterBase Public License 1.0. Both licenses are modified versions of the Mozilla Public License 1.1. Firebird 1.0 was essentially a bug-fixed version of the InterBase 6.0 open source edition with some minor new features. Development on the Firebird 2 codebase began with the porting of the Firebird 1.0 C code to C++, together with a major code-cleaning undertaking. Firebird 1.5 was the first release of the Firebird 2 codebase and as such a significant milestone for the developers and the whole project.

Around the 20th birthday of the InterBase/Firebird product line, original creator Jim Starkey recollected:

"September 4, 2004 is the 20th anniversary of what is now Firebird. I quit my job at DEC in August, took a three day end-of-summer holiday, and began work on September 4, 1984 in my new career as a software entrepreneur. As best as I can reconstruct, the first two files were cpre.c and cpre.h (C preprocessor), later changed to gpre.c and gpre.h. The files were created on a loaner DEC Pro/350, a PDP-11 personal computer that went exactly nowhere, running XENIX. Gpre was my first C program, XENIX was my first experience with Unix, and the Pro/350 was my very last (but not lamented) experience with PDP-11s."

More information on Firebird's history can be found on the InterBase/Firebird History pages.

9. Ingres (pronounced /iŋ-grεs'/) is a commercially supported, open-source relational database management system. Ingres was first created as a research project at the University of California, Berkeley starting in the early 1970s and ending in the early 1980s. The original code, like that from other projects at Berkeley, was available at minimal cost under a version of the BSD license. Since the mid-1980s, Ingres has spawned a number of commercial database applications, including Sybase, Microsoft SQL Server, NonStop SQL and a number of others. Postgres (Post Ingres), a project which started in the mid-1980s, later evolved into PostgreSQL.

Ingres

In 1973 when the System R project was getting started at IBM, the research team released a series of papers describing the system they were building. Two scientists at Berkeley, Michael Stonebraker and Eugene Wong, became interested in the concept after reading the papers, and decided to start a relational database research project of their own.

They had already raised money for researching a geographic database system for Berkeley's economics group, which they called Ingres, for INteractive Graphics REtrieval System. They decided to use this money to fund their relational project instead, and used this as a seed for a new and much larger project. For further funding Stonebraker approached DARPA, the obvious funding source for computing research and development at the time, but both DARPA and the Office of Naval Research (ONR) turned them down as they were already funding database research elsewhere. Stonebraker then introduced his idea to other agencies, and, with help from his colleagues, he eventually obtained modest support from the NSF and three military agencies: the Air Force Office of Scientific Research, the Army Research Office, and the Navy Electronic Systems Command.

Thus funded, Ingres was developed during the mid-1970s by a rotating team of students and staff. Ingres went through an evolution similar to that of System R, with an early prototype in 1974 followed by major revisions to make the code maintainable. Ingres was then disseminated to a small user community, and project members rewrote the prototype repeatedly to incorporate accumulated experience, feedback from users, and new ideas. Ingres remained largely similar to IBM's System R in concept, but based on "low end" systems, namely Unix on DEC machines.

10. Informix is a family of relational database management system (RDBMS) products by IBM. It is positioned as IBM's flagship data server for online transaction processing (OLTP) as well as integrated solutions. IBM acquired the Informix technology in 2001 from Informix Software.

1980: Early history

Roger Sippl and Laura King worked at Cromemco, an early S-100/CP/M company, where they developed a small relational database based on ISAM techniques, as a part of a report-writer software package.

Sippl and King left Cromemco to found Relational Database Systems (RDS) in 1980. Their first product, Marathon, was essentially a 16-bit version of their earlier ISAM work, released on the Onyx operating system, a version of Unix for early ZiLOG microprocessors.

At RDS, they turned their attention to the emerging RDBMS market and released their own product as Informix (INFORMation on unIX) in 1981. It included their own Informer language. It featured the ACE report writer, used to extract data from the database and present it to users for easy reading. It also featured the PERFORM screen form tool, which allowed a user to interactively query and edit the data in the database. The final release of this product was version 3.30 in early 1986.

In 1985, they introduced a new SQL-based query engine as part of INFORMIX-SQL (or ISQL) version 1.10 (version 1.00 was never released). This product also included SQL variants of ACE and PERFORM. The most significant difference between ISQL and the previous Informix product was the separation of the database access code into an engine process (sqlexec), rather than embedding it directly in the client — thus setting the stage for client-server computing with the database running on a separate machine from the user's machine. The underlying ISAM-based file storage engine was known as C-ISAM.

Through the early 1980s Informix remained a small player, but as Unix and SQL grew in popularity during the mid-1980s, their fortunes changed. By 1986 they had become large enough to float a successful IPO, and changed the company name to Informix Software. The products included INFORMIX-SQL version 2.00 and INFORMIX-4GL 1.00, both of which included the database engine as well as development tools (I4GL for programmers, ISQL for non-programmers).

A series of releases followed, including a new query engine, initially known as INFORMIX-Turbo. Turbo used the new RSAM, with great multi-user performance benefits over C-ISAM. With the release of the version 4.00 products in 1989, Turbo was renamed INFORMIX-OnLine (in part because it permitted coherent database backups while the server was online and users were modifying the data), and the original server based on C-ISAM was separated from the tools (ISQL and I4GL) and named INFORMIX-SE (Standard Engine). Version 5.00 of Informix OnLine was released at the very end of 1990, and included full distributed transaction support with two-phase commit and stored procedures. Version 5.01 was released with support for triggers too.

1988: Innovative Software acquisition

In 1988, Informix purchased Innovative Software, makers of a DOS and Unix-based office system called SmartWare and WingZ, an innovative spreadsheet program for the Apple Macintosh.

WingZ provided a highly graphical user interface, supported very large spreadsheets, and offered programming in a HyperCard-like language known as HyperScript. The original release proved very successful, becoming the #2 spreadsheet, behind Microsoft Excel, although many WingZ users found it to be a superior product. In 1990, WingZ ports started appearing for a number of other platforms, mostly Unix variants. During this period, many financial institutions began investing in Unix workstations as a route to increasing the desktop "grunt" required to run large financial models. For a brief period, WingZ was successfully marketed into this niche. However, it suffered from a lack of development and marketing resources, possibly due to a general misunderstanding of the non-server software market. By the early 1990s WingZ had become uncompetitive, and Informix eventually sold it in 1995. Informix also sold a license to Claris, which combined it with an updated GUI as Claris Resolve.

1994: Dynamic Scalable Architecture

With its failure in office automation products, Informix refocused on the growing database server market. In 1994, as part of a collaboration with Sequent Computer Systems, Informix released its version 6.00 database server, which featured its new Dynamic Scalable Architecture, DSA.

DSA involved a major rework of the core engine of the product, supporting both horizontal parallelism and vertical parallelism, and based on a multi-threaded core well suited towards the symmetric multiprocessing systems that Sequent pioneered and that major vendors like Sun Microsystems and Hewlett-Packard would eventually follow up on. The two forms of parallelism made the product capable of market-leading levels of scalability, both for OLTP and data warehousing.

Now known as Informix Dynamic Server (after briefly entertaining the name Obsidian and then being named Informix OnLine Dynamic Server), Version 7 hit the market in 1994, just when SMP systems were becoming popular and Unix in general had started to become the server operating system of choice. Version 7 was essentially a generation ahead of the competition, and consistently won performance benchmarks. As a result of its success Informix vaulted to the #2 position in the database world by 1997, pushing Sybase out of that spot with surprising ease.

Building on the success of Version 7, Informix split its core database development investment into two efforts. One effort, first known as XMP (for eXtended Multi-Processing), became the Version 8 product line, also known as XPS (for eXtended Parallel Server). This effort focused on enhancements in data warehousing and parallelism in high-end platforms, including shared-nothing platforms such as IBM's RS-6000/SP.

1995: Illustra acquisition

The second focus, which followed the late 1995 purchase of Illustra, concentrated on object-relational database (O-R) technology. Illustra, written by ex-Postgres team members and led by database pioneer Michael Stonebraker, included various features that allowed it to return fully-formed objects directly from the database, a feature that can significantly reduce programming time in many projects. Illustra also included a feature known as DataBlades that allowed new data types and features to be included in the basic server as options. These included solutions to a number of thorny SQL problems, namely time series, spatial and multimedia data. Informix integrated Illustra's O-R mapping and DataBlades into the 7.x OnLine product, resulting in Informix Universal Server (IUS), or more generally, Version 9.

Both new versions, V8 (XPS) and V9 (IUS), appeared on the market in 1996, making Informix the first of the "big three" database companies (the others being Oracle and Sybase) to offer built-in O-R support. Commentators paid particular attention to the DataBlades, which soon became very popular: dozens appeared within a year, ported to the new architecture after partnerships with Illustra. This left other vendors scrambling, with Oracle introducing a "grafted on" package for time-series support in 1997, and Sybase turning to a third party for an external package which remains an unconvincing solution.

1996 and 1997: Internal Problems

Although Informix took a technological lead in the database software market, product releases began to fall behind schedule by late 1996. A new application-development product, Informix-NewEra, was plagued with technical and marketing problems and soon overshadowed by the emerging Java programming language. Michael Stonebraker had promised that the Illustra technology would be integrated within a year of the late-1995 acquisition, but, as Gartner Group had predicted, the integration required more than two years. Unhappy with the new direction of the company, XPS lead architect Gary Kelley suddenly resigned and joined arch-rival Oracle Corporation in early 1997, taking 11 of his developers with him. [1] Informix ultimately sued Oracle to prevent the loss of trade secrets.

1997: Misgovernance

Failures in marketing and an unfortunate record of corporate misgovernance overshadowed Informix's technical successes. On April 1, 1997, Informix announced that first-quarter revenues fell short of expectations by $100 million. CEO Phillip White blamed the shortfall on a loss of focus on the core database business while devoting too many resources to object-relational technology. [2] Huge operating losses and job cuts followed. Informix re-stated earnings from 1994 through 1996. A significant amount of revenue from the mid-1990s involved software license sales to partners who did not sell through to an end-user customer; this and other irregularities led to overstating revenue by over $200 million. Even after White's departure in July 1997, the company continued to struggle with accounting practices, re-stating earnings again in early 1998. [3]

Repercussions from misgovernance

Although allegations of misgovernance continued to haunt Informix, the capabilities of Informix Dynamic Server (IDS) began to strengthen, and new leadership began to emerge. An excerpt from PC Magazine's September 22, 1998 article on the top 100 companies changing the way you compute:

...Informix is battling rival Oracle in the object/relational arena by extending its flagship Informix Dynamic Server with a Universal Data Option. After a turbulent year that included a problematic audit, Robert Finocchio was appointed as the new CEO of the Menlo Park, California company. With 1997 revenues of $662.3 million, Informix has begun to strengthen its position in the database market.

In November 2002, Phillip White, the former CEO of Informix ousted in 1997, was indicted by a federal grand jury and charged with eight counts of securities, wire, and mail fraud. In a plea bargain thirteen months later, he pleaded guilty to a single count of filing a false registration statement with the U.S. Securities and Exchange Commission.

In May 2004, the Department of Justice announced that White had been sentenced to two months in federal prison for securities fraud, a $10,000 fine, a two-year period of supervised release, and 300 hours of community service. The announcement noted that the amount of loss to shareholders from the violation could not reasonably be estimated under the facts of the case [1]. White's earlier plea agreement had limited prison time to no more than 12 months.

German citizen and resident Walter Königseder, the company's Vice-President in charge of European operations, was also indicted by a federal grand jury but the United States has been unable to secure his extradition.

In November 2005, a book detailing the rise and fall of Informix Software and CEO Phil White was published. Written by a longtime Informix employee, The Real Story of Informix Software and Phil White: Lessons in Business and Leadership for the Executive Team [2] provides an insider's account of the company, with a detailed chronology of its initial success, its ultimate failure, and how CEO Phil White ended up in jail.

2001: Other acquisitions

Starting in the year 2000, the major events in Informix's history no longer centered on its technical innovations. That year, in March, Informix acquired Ardent Software, a company with a history of mergers and acquisitions of its own. The acquisition added the multi-dimensional engines UniVerse and UniData (known collectively as U2) to Informix's already-numerous list of database engines, which included not only the Informix heritage products but also a data-warehouse-oriented SQL engine from Red Brick and Cloudscape, an RDBMS written entirely in Java that was later bundled with the reference implementation of J2EE.

Prior to its purchase, Informix's product lineup included:

  • Informix C-ISAM - the latest version of the original Marathon database
  • Informix SE - offered as a low-end system for embedding into applications
  • Informix OnLine - a competent system for managing medium size databases
  • Informix Extended Parallel Server (XPS, V8) - a high-end version of the V7 code base for use on huge distributed machines
  • Informix Universal Server (V9) - a combination of the V7 OnLine engine with the O-R mapping and DataBlade support from Illustra
  • Informix-4GL - A fourth generation language for application programming
  • Red Brick Warehouse - a data warehouse product
  • Cloudscape - an RDBMS written entirely in Java that fits into mobile devices on the low end and J2EE-based architectures on the high end. In 2004 Cloudscape was released by IBM as an open-source database managed by the Apache Software Foundation under the name Derby.
  • U2 suite, UniVerse and UniData - multidimensional databases that offer networks, hierarchies, arrays and other data formats difficult to model in SQL

IBM's takeover of Informix

In July 2000, the former CEO of Ardent, Peter Gyenes, became the CEO of Informix, and soon re-organized Informix to make it more attractive as an acquisition target. The major step taken was to separate out all of the database engine technologies from the applications and tools.

In April 2001, IBM took advantage of this reorganization and, prompted by a suggestion from Wal-Mart (Informix's largest customer),[4] bought from Informix the database technology, the brand, the plans for future development (an internal project codenamed "Arrowhead"), and the over 100,000-customer base associated with these.[5] The remaining application and tools company renamed itself Ascential Software. In May 2005, IBM bought Ascential, reuniting Informix's assets under IBM's Information Management Software portfolio.

II-Oracle Database

The relational database management system (RDBMS) officially called Oracle Database (and commonly referred to as Oracle RDBMS or simply as Oracle) has become a major presence in database computing. Oracle Corporation produces and markets this software.

Larry Ellison and his friends and former co-workers Bob Miner and Ed Oates started the consultancy Software Development Laboratories (SDL) in 1977. SDL developed the original version of the Oracle software. The name Oracle comes from the code-name of a CIA-funded project Ellison had worked on while previously employed by Ampex.

Physical and logical structuring in Oracle

An Oracle database system comprises at least one instance of the application, along with data storage. An instance comprises a set of operating-system processes and memory-structures that interact with the storage. Typical processes include PMON (the process monitor) and SMON (the system monitor).

Users of Oracle databases refer to the server-side memory-structure as the SGA (System Global Area). The SGA typically holds cache information such as data-buffers, SQL commands and user information. In addition to storage, the database consists of online redo logs (which hold transactional history). Processes can in turn archive the online redo logs into archive logs (offline redo logs), which provide the basis (if necessary) for data recovery and for some forms of data replication.

The Oracle RDBMS stores data logically in the form of tablespaces and physically in the form of data files. Tablespaces can contain various types of segments, for example data segments and index segments. Segments in turn comprise one or more extents, and extents comprise groups of contiguous data blocks, the basic units of data storage. At the physical level, data files comprise one or more data blocks, where the block size can vary between data files.
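The logical hierarchy described above (tablespace, segment, extent, data block) can be sketched in a few lines of Python. The class names and sizes here are illustrative only, not Oracle internals:

```python
# Illustrative sketch (not Oracle internals): the logical storage
# hierarchy described above, with sizes rolled up from data blocks.

BLOCK_SIZE = 8192  # bytes; in Oracle the block size can vary per data file


class Extent:
    """A group of contiguous data blocks."""
    def __init__(self, num_blocks):
        self.num_blocks = num_blocks

    def size(self):
        return self.num_blocks * BLOCK_SIZE


class Segment:
    """One or more extents, e.g. a data segment or an index segment."""
    def __init__(self, name, extents):
        self.name = name
        self.extents = extents

    def size(self):
        return sum(e.size() for e in self.extents)


class Tablespace:
    """Logical container for segments of various types."""
    def __init__(self, name, segments):
        self.name = name
        self.segments = segments

    def size(self):
        return sum(s.size() for s in self.segments)


users = Tablespace("USERS", [
    Segment("EMP_DATA", [Extent(8), Extent(16)]),  # data segment
    Segment("EMP_IDX", [Extent(8)]),               # index segment
])
print(users.size())  # 32 blocks of 8 KB = 262144 bytes
```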

Oracle database management tracks its computer data storage with the help of information stored in the SYSTEM tablespace. The SYSTEM tablespace contains the data dictionary and often (by default) indexes and clusters. (A data dictionary consists of a special collection of tables that contain information about all user objects in the database.) Since version 8i, the Oracle RDBMS has also supported "locally managed" tablespaces, which can store space-management information in bitmaps in their own headers rather than in the SYSTEM tablespace (as happens with the default "dictionary-managed" tablespaces).
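The idea of a data dictionary is easy to see in miniature with SQLite, which keeps an analogous catalog table called sqlite_master. This sketch uses Python's sqlite3 module (not Oracle) to list the user objects in a scratch database:

```python
# SQLite analogue of a data dictionary: every SQLite database keeps a
# catalog table, sqlite_master, describing the user objects it holds.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX emp_name_idx ON employee (name)")

# Query the catalog just like any other table
rows = conn.execute(
    "SELECT type, name FROM sqlite_master ORDER BY name"
).fetchall()
print(rows)  # [('index', 'emp_name_idx'), ('table', 'employee')]
```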

If the Oracle database administrator has instituted Oracle RAC (Real Application Clusters), then multiple instances, usually on different servers, attach to a central storage array. This scenario offers numerous advantages, most importantly performance, scalability and redundancy. However, support becomes more complex, and many sites do not use RAC. In version 10g, grid computing has introduced shared resources where an instance can use (for example) CPU resources from another node (computer) in the grid.

The Oracle DBMS can store and execute stored procedures and functions within itself. PL/SQL (Oracle Corporation's proprietary procedural extension to SQL), or the object-oriented language Java can invoke such code objects and/or provide the programming structures for writing them.
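As a rough analogue of such server-side code objects, SQLite (through Python's sqlite3 module) lets an application register a user-defined function that SQL statements can then invoke; the function name initials below is invented for illustration, and this is not PL/SQL:

```python
# Oracle runs PL/SQL or Java inside the server; as a rough analogue,
# SQLite lets a host application register a function that SQL can call.
import sqlite3


def initials(name):
    """Return the upper-case initials of a space-separated name."""
    return "".join(part[0].upper() for part in name.split())


conn = sqlite3.connect(":memory:")
conn.create_function("initials", 1, initials)

# SQL now invokes the registered function like a built-in
result = conn.execute("SELECT initials('ada lovelace')").fetchone()[0]
print(result)  # AL
```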

III- A DBMS is a complex set of software programs that controls the organization, storage, management, and retrieval of data in a database. DBMSs are categorized according to their data structures or types; a DBMS is sometimes also known as a database manager. It is a set of prewritten programs used to store, update and retrieve a database. A DBMS includes:

  1. A modeling language to define the schema of each database hosted in the DBMS, according to the DBMS data model.
    • The four most common types of organizations are the hierarchical, network, relational and object models. Inverted lists and other methods are also used. A given database management system may provide one or more of the four models. The optimal structure depends on the natural organization of the application's data, and on the application's requirements (which include transaction rate (speed), reliability, maintainability, scalability, and cost).
    • The dominant model in use today is the ad hoc one embedded in SQL, despite the objections of purists who believe this model is a corruption of the relational model, since it violates several of its fundamental principles for the sake of practicality and performance. Many DBMSs also support the Open Database Connectivity API that supports a standard way for programmers to access the DBMS.
  2. Data structures (fields, records, files and objects) optimized to deal with very large amounts of data stored on a permanent data storage device (which implies relatively slow access compared to volatile main memory).
  3. A database query language and report writer to allow users to interactively interrogate the database, analyze its data and update it according to the users' privileges on data.
    • It also controls the security of the database.
    • Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called subschemas. For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data.
    • If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. However, it may not leave an audit trail of actions or provide the kinds of controls necessary in a multi-user organization. These controls are only available when a set of application programs are customized for each data entry and updating function.
  4. A transaction mechanism, that ideally would guarantee the ACID properties, in order to ensure data integrity, despite concurrent user accesses (concurrency control), and faults (fault tolerance).
    • It also maintains the integrity of the data in the database.
    • The DBMS can maintain the integrity of the database by not allowing more than one user to update the same record at the same time. The DBMS can help prevent duplicate records via unique index constraints; for example, no two customers with the same customer numbers (key fields) can be entered into the database. See ACID properties for more information (Redundancy avoidance).

The DBMS accepts requests for data from the application program and instructs the operating system to transfer the appropriate data.

When a DBMS is used, information systems can be changed much more easily as the organization's information requirements change. New categories of data can be added to the database without disruption to the existing system.
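A minimal illustration of this point, again using SQLite: a new category of data is added as a column without disturbing existing rows (the table and column names are invented for the example):

```python
# Sketch of schema evolution: a new category of data is added as a
# column, and rows stored under the old schema remain intact.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO employee VALUES (1, 'Ada')")

# Later requirement: track departments. Existing data is untouched.
conn.execute("ALTER TABLE employee ADD COLUMN department TEXT")

row = conn.execute("SELECT id, name, department FROM employee").fetchone()
print(row)  # (1, 'Ada', None)
```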

Organizations may use one kind of DBMS for daily transaction processing and then move the detail onto another computer that uses a DBMS better suited to random inquiries and analysis. Overall systems-design decisions are made by data administrators and systems analysts; detailed database design is performed by database administrators.

Database servers are specially designed computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with RAID disk arrays used for stable storage. Connected to one or more servers via a high-speed channel, hardware database accelerators are also used in large volume transaction processing environments.

DBMSs are found at the heart of most database applications. DBMSs were sometimes built around a private multitasking kernel with built-in networking support, although nowadays these functions are left to the operating system.