The past, the present and the future of Computer Hardware and its impact on GIS
In 1965, Gordon Moore, one of Intel's co-founders, made a remarkable observation during a speech: every new chip contained roughly twice as many transistors as its predecessor, and each chip was released within 12-18 months of the previous one. This observation was later coined "Moore's Law". Today Moore, the Chairman Emeritus of Intel, says, "To be honest, I did not expect this law to still be true three decades later." The number of transistors on a chip has increased more than 3,200 times, from 2,300 on the 4004 in 1971 to 7.5 million on the Pentium II processor. Intel currently makes more than 10^17 transistors per year, roughly 20 million transistors for every person on the earth's surface.
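The growth figures above can be checked with a little arithmetic. The sketch below (using only the figures quoted in the text: 2,300 transistors on the 4004 in 1971, 7.5 million on the Pentium II in 1997) computes the doubling period those numbers actually imply:

```python
import math

# Figures from the text: 4004 (1971) had 2,300 transistors,
# the Pentium II (1997) roughly 7.5 million.
t0, t1 = 2_300, 7_500_000
years = 1997 - 1971

growth = t1 / t0                            # ~3,260x: matches "more than 3,200 times"
doublings = math.log2(growth)               # how many times the count doubled
months_per_doubling = years * 12 / doublings

print(f"growth: {growth:.0f}x over {years} years")
print(f"one doubling roughly every {months_per_doubling:.0f} months")  # ~27 months
```

Interestingly, the implied period of about 27 months is a little longer than the 12-18 months quoted above, closer to the "two years" figure of later formulations of the law.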
But in its initial days the computer did not catch the fancy of the industry's stalwarts, either in terms of economical processing power or in the breadth of application it would find in the years to come. Thomas Watson, chairman of IBM, remarked in 1943, "I think there is a world market for maybe five computers." Software visionary Bill Gates's 1981 observation, "640K ought to be enough for anybody," likewise shows how the pace of technological development has defied the imagination of many. Nobody could predict the trailblazing path of computer hardware. What is the fallout of this?
Computers, and specifically the hardware and software associated with GIS, have moved out of the realm of research and development and joined the common man's land of application-specific usage that was not thought of when GIS was being conceived. What started as a system to manage land records is now used to manage applications in telecom, utilities, transportation, forestry and agriculture; indeed, almost any field that uses database management can have a GIS application.
For a long time GIS was a field where analogue techniques were used. During the 1960s and 1970s, processing capacity was limited and hardware very expensive, so GIS was confined to limited applications. More elaborate use of computers in GIS required substantial investment in machinery that only big organisations could afford. Spatial data processing is a particularly demanding kind of data processing: spatial relationships, such as those on a geographical map, have to be accounted for in many algorithms, and the cost of spatial algorithms tends to become quadratic, or of higher order, very quickly. It becomes more complex still when data has to be processed with respect to a given time frame. High data volume adds further to the complexity, as do the accuracy requirements on the results, since those results serve as the reference point for the next computing layer. Together these attributes make spatial data processing very demanding, and the cost of processing large data volumes was the determining factor that confined GIS to research laboratories and large organisations for over two decades.
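To illustrate why spatial processing "tends to become quadratic": many naive spatial algorithms compare every feature with every other. A minimal sketch (with hypothetical survey points; not taken from any GIS package) of an all-pairs nearest-neighbour search:

```python
import math

def nearest_neighbours(points):
    """Naive nearest-neighbour search: every point is compared with
    every other point, so the work grows as O(n^2)."""
    nearest = {}
    for i, p in enumerate(points):
        best, best_d = None, math.inf
        for j, q in enumerate(points):
            if i == j:
                continue
            d = math.dist(p, q)  # Euclidean distance between two (x, y) points
            if d < best_d:
                best, best_d = j, d
        nearest[i] = best
    return nearest

# Four hypothetical survey points (x, y). 4 points need 12 comparisons;
# 8 points would need 56 -- doubling the data quadruples the work.
pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(nearest_neighbours(pts))  # {0: 1, 1: 0, 2: 3, 3: 2}
```

Spatial index structures (grids, quadtrees, R-trees) exist precisely to avoid this quadratic blow-up, but they demand memory and processing power of their own.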
The very fact that almost 70 percent of all data has geographical location as its denominator speaks volumes about the untapped areas of GIS-based application. The "fission and fusion" of technological entities has become the order of the day. The fusion of mobile telephony with the ability to find the geographical location of the mobile device is leading to a new field of Location Based Services. Be it serendipity or the result of years of focused research, the common man will continue to be awed by the wonders the power of computation can achieve.
Can anyone guess what the computer will be in the year 2025? How many more areas of application will it inundate? How many more information-system subsets like GIS will come out of research labs?
Computer hardware refers to the physical, tangible components of a computer system. GIS hardware comprises survey equipment, map digitizers, scanners, printers, plotters and tertiary storage devices, apart from the computer unit itself. (See the various components of GIS hardware discussed in the previous issues of [email protected])
The discussion of computer hardware system architecture can be viewed in terms of:
Centralized Computing
It has a central computer with a multi-user, multi-tasking operating system and self-contained, independent application programs. End-users work at dumb terminal devices, with a command-oriented operating system and procedural programming languages. Professional programmers/operators manage the system for the end-users, who need not be technical.
Personal Computing
It came into vogue in the 1980s when Apple Computer and IBM started shipping Personal Computers (PCs). The PC had a stand-alone operating system with self-contained, independent application programs, and the end-user was also the programmer/operator of the system.
Distributed Computing
This linked multiple computer processors, typically using network operating systems under a client/server network architecture. With distributed computing, an application could be stored on one computer, executed on another, and the results displayed on a third.
Network Computing
This is a more centralized system built around client/server network architectures, where minimal-configuration client computers depend upon the server computer for operation. Proponents of the Network Computer point to significantly lower initial and operating costs due to its centralized design.
Elements of Personal Computing Hardware
In 1968, Douglas Engelbart of the Stanford Research Institute demonstrated his system of keyboard, keypad, mouse and windows. He also demonstrated the use of a word processor, a hypertext system and remote collaborative work with colleagues. It was 22 years later, on 22nd May 1990, that Microsoft launched Microsoft Windows 3.0, which opened a new chapter in user-friendly operating systems. Although it was predicted that Windows would go back to the closet, the time was ripe: sufficient hardware capability at reasonable cost fuelled the phenomenal sales of the Windows OS. This also marked a new era in the computer industry, in which processing power was applied to the graphical user interface and to developing a user-friendly environment. These factors have had a very positive impact on the development of GIS as an industry.
In common parlance a computer is referred to by the speed at which its processor runs or the amount of primary memory it has: for example, a PII with 128 MB RAM. But the data processing, data transmission and data storage capacities of a computer do not depend exclusively on the CPU; they depend on a host of other units, namely the motherboard with its chipset, the memory, the display card, the secondary storage devices' ability to read and write, and the strength of the graphics card to produce the required graphical display. Some of the primary components of the personal computer are described below.
Central Processing Unit
A microprocessor or CPU is an integrated circuit built on a tiny piece of silicon containing millions of transistors (post-386 phase), interconnected through superfine traces of aluminium. The transistors work together to store and manipulate data so that the microprocessor can perform a wide variety of useful functions, as directed by software (Intel).
The journey of the CPU began in 1969, when Intel's Marcian Hoff and Stan Mazor designed a 4-bit CPU chip-set architecture that could receive instructions and perform simple functions on data. This CPU was called the 4004, and it became part of the MCS-4 (Microcomputer System, 4-bit) released in November 1971. The 4004 had a clock speed of 108 kHz and could address 640 bytes. In 1981, IBM's selection of the 8088 processor for its IBM PC marked a new era of recognition for Intel's processor designs.
The Intel processors were based on Complex Instruction Set Computing (CISC), which allows the number of bytes per instruction to vary according to the instruction being processed. The other instruction-set philosophy in vogue was Reduced Instruction Set Computing (RISC), with fixed-length instructions. RISC-based systems became popular in the 1990s with the launch of IBM's RISC-based RS/6000 workstation line.
Until the early 1990s, the CPU went to RAM both to fetch information and to write it back. But RAM was not able to keep up with the increasing speed of the CPU. To cut down the CPU's wait states, an intermediate level of memory called cache was introduced: smaller in size but extremely fast. The on-board or primary cache usually operated at speeds of 12-25 ns, with sizes starting from 32 KB.
In 1995 Intel launched its 6th-generation processor, the Pentium Pro. Its most significant feature was the integration of the second-level cache onto the processor module itself. This secondary or L2 cache (256 KB to 1 MB on the Pentium Pro) matters because the most commonly used instructions can be stored there, on the module itself, so that the CPU does not have to reach into the much slower system memory to retrieve them. L2 cache is many times faster than system memory, so a CPU with L2 cache has an edge over others; quite often a smaller cache can even outperform a much larger one if the smaller one is slightly faster. L2 cache sizes nowadays range from 128 KB (Celeron) to 2 MB (Pentium III Xeon). Today microprocessors are available with clock speeds of 1 GHz. Intel has been the traditional trendsetter in the development and launch of new CPUs, but of late AMD (Advanced Micro Devices Inc.) has been producing chips running at speeds comparable to, and better than, those from Intel. Intel's new NetBurst microarchitecture for the Pentium 4 has been designed with a specific focus on Internet, imaging, streaming video, 3D, multimedia and multitasking user environments.
Motherboards & Chipsets
The motherboard is the primary printed circuit board; all of the basic circuitry and components required for a PC to function are either contained on it or attached to it. The motherboard typically carries the system bus, processor and coprocessor sockets, memory sockets, serial and parallel ports, expansion slots, and peripheral controllers; it may alternatively be referred to as the mainboard or system board. It is the central component that enables all the other parts of a computer to mesh. Mounted on the motherboard is an important group of chips known as the chipset: an integrated set of VLSI chips that performs all the vital support functions of a computer system, including functions that once required separate chips. The chipset acts as a hub on the motherboard, with every bit of information stored in memory or sent to an I/O device passing through it. Chipsets are designed around the specification of the CPU with which they are to be used.
Early motherboards implemented a specific design known as the North Bridge-South Bridge architecture, in which the board was designed primarily around two chips. One, located near the upper edge of the board (the north bridge), controlled the transfer and co-ordination of information between the processor and the system memory. The other, located near the lower edge (hence the name south bridge), took care of the transfer of data between the interface slots and the system BIOS. An important factor determining system performance is the motherboard's front-side bus (FSB) speed: the speed at which the motherboard can talk to the other components in the system. Pentium III systems have a 133 MHz FSB, and the AMD Athlon uses a 200 MHz FSB.
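Those FSB figures translate directly into peak memory bandwidth: clock rate times the number of bytes moved per clock. A simplified sketch, assuming the standard 64-bit data bus of these systems and one transfer per clock (the Athlon's 200 MHz figure is itself an effective rate):

```python
def fsb_bandwidth_mb_s(fsb_mhz, bus_bits=64):
    """Peak FSB bandwidth in MB/s: one transfer per clock,
    each transfer moving bus_bits / 8 bytes."""
    return fsb_mhz * (bus_bits // 8)

print(fsb_bandwidth_mb_s(133))  # Pentium III: 133 MHz x 8 bytes = 1064 MB/s
print(fsb_bandwidth_mb_s(200))  # AMD Athlon: 200 MHz x 8 bytes = 1600 MB/s
```

This is a theoretical peak; real throughput is lower once protocol overheads and contention are accounted for.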
Memory
As the CPU operates much faster than hard disk drives and other secondary storage devices, the data to be processed must, for efficiency, reside in primary memory, which is very fast; from there, data and program code can be fetched by the CPU and processed. There are two types of primary memory: RAM (random access memory, or read/write memory), which loses its contents when the machine is switched off, and ROM (read-only memory), which never loses its contents unless destroyed. ROM is normally used for storing the most fundamental parts of the operating system, those required when a computer is switched on: the BIOS is a permanent ROM that contains the initial boot program used when the computer is first turned on.
Cache is a high-speed memory used as an intermediate storage area to speed up the CPU's access to primary memory. Ideally the access speed of primary memory would match that of the CPU, but memory that fast is expensive; hence a small amount of faster cache memory is used instead. See the CPU section for details.
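How much a cache helps depends on how often the data requested is already in it. A toy sketch (a simulation for illustration, not any real cache design) of a direct-mapped cache, fed an access pattern that loops over the same few addresses, shows why repeated access is so cheap:

```python
def simulate_direct_mapped(addresses, num_lines=4):
    """Toy direct-mapped cache: address a always maps to line a % num_lines.
    Returns (hits, total_accesses)."""
    lines = [None] * num_lines
    hits = 0
    for a in addresses:
        slot = a % num_lines
        if lines[slot] == a:
            hits += 1          # already cached: served at cache speed
        else:
            lines[slot] = a    # miss: fetched from slower main memory
    return hits, len(addresses)

# A loop re-reading the same four addresses: only the first pass misses.
hits, total = simulate_direct_mapped([0, 1, 2, 3] * 10)
print(f"{hits}/{total} hits")  # 36/40 hits
```

Real caches add associativity and replacement policies, but the principle is the same: programs that reuse data (as loops do) run mostly out of cache, which is why even a small, fast cache cuts the CPU's wait states so effectively.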
"Non-volatile" data storage, which retains data even when the power is off, is most commonly provided by hard disk drives (HDDs). The graphics subsystem, meanwhile, needs a high-speed route for transferring information back and forth between the graphics card, main memory and the system's processor. 3D visualisation and rendering involve complex calculations; most of these are done in the CPU with the help of 3D-acceleration instruction sets, which are standard on Intel (Katmai New Instructions) and AMD (3DNow!) processors.
Categories of Computers
The basic categories of computer change with advances in hardware technology. From a technical standpoint, computers can be classified as:
Embedded Computers are special-purpose computers designed for controlling machine processes. Examples are found in TVs, radios, ATMs, etc.
Hand-Held / Personal Communication Assistant (PCA) devices use a minimal OS such as Windows CE or Geoworks, and may use a wireless Personal Communication Services (PCS) form of communication.
Network Computers are centralized systems built around client/server network architectures, in which minimal-configuration client computers depend upon server computers for all but the most elementary functions.
Microcomputers (also Desktop or Personal Computers) are the smallest general-purpose computers, generally used to support personal computing with a single screen and a single user. Micros may also be used to access data and systems on minicomputers and mainframes, and for such CPU-intensive operations as graphics generation and simulation.
Workstation Computers are powerful single-user computers used for complex data analysis and design work.
Minicomputers are midsize computers, often used to support work groups in large organizations or to perform corporate computing for smaller organizations (less than $40 million). Minicomputers typically have dozens of terminals connected.
Mainframe Computers are large computers typically used for the transaction processing systems that support organizational information systems. They typically have hundreds of terminals connected, and may serve as the primary computer in a centralized computing system or as a server in a large organizational client/server system.
Supercomputers are the largest and fastest computers: specialized high-speed machines used for lengthy calculations rather than for processing transactions or generating reports.
Present hardware technology allows enormous amounts of data to be generated, processed and stored; what matters from the GIS perspective is the usefulness of that data. More and more satellites are going into orbit, scanning every single square metre of the earth's surface, yet most of the imagery lies unused for years. How can we derive useful information from this massive geodata flow? What do we need from GIS software?