UNIT I LESSON – 1 INTRODUCTION TO COMPUTER SYSTEM
Abstract
The lesson provides an introduction to computer systems, covering key characteristics of computers such as speed, storage, accuracy, versatility, automation, diligence, and reliability. It discusses the historical evolution of computing devices, from early counting tools like the abacus to modern computational systems. Additionally, the lesson touches on the importance of office automation systems, including various applications that enhance operational efficiency and support decision-making processes.
Related papers
Rules of application
Definition, classification, history, Hardware and Software
2009
The author has attempted to write this material so that it will be easily understood by those who have had only limited experience with computers. To aid those readers, several terms and concepts have been defined, and Chapter 1 includes a brief discussion of principles of computer operation, programming, data-preparation problems, and automatic programming. Engineering terminology has been held to a minimum, and the history of programmer training, personnel and organizational growth, and the like has not been treated. To some small extent, the comments on operational utility bring out the very real usefulness of computers for the solution of data-processing problems.
2005
The main ingredient here is the repeated division by 16. By dividing by 16 again and again, we are building up powers of 16. For example, in the line

    Divide 1350 by 16, yielding 84, remainder 6.

above, that is our second division by 16, so it is a cumulative division by 16². [Note that this is why we are dividing by 16, not because the number has 16 bits.]

1.2.3 There Is No Such Thing As "Hex" Storage at the Machine Level!

Remember, hex is merely a convenient notation for us humans. It is wrong to say something like "The machine stores the number in hex," "The compiler converts the number to hex," and so on. It is crucial that you avoid this kind of thinking, as it will lead to major misunderstandings later on.

1.3 Main Memory Organization

During the time a program is executing, both the program's data and the program itself, i.e. the machine instructions, are stored in main memory. In this section, we will introduce main memory structure. (We will usually refer to main memory as simply "memory.")

1.3.1 Bytes, Words and Addresses

1.3.1.1 The Basics

Memory (this means RAM/ROM) can be viewed as a long string of consecutive bytes. Each byte has an identification number, called an address. Again, an address is just an "i.d. number," like a Social Security Number identifies a person, a license number identifies a car, and an account number identifies a bank account. Byte addresses are consecutive integers, so that the memory consists of Byte 0, Byte 1, Byte 2, and so on.

On each machine, a certain number of consecutive bytes is called a word. The number of bytes or bits (there are eight times as many bits as bytes, since a byte consists of eight bits) in a word in a given machine is called the machine's word size. This is usually defined in terms of the size of number which the CPU addition circuitry can handle, which in recent years has typically been 32 bits.
In other words, the CPU's adder inputs two 32-bit numbers and outputs a 32-bit sum, so we say the word size is 32 bits. Most CPUs popular today have 32-bit or 64-bit words. As of December 2006, the trend is definitely toward the latter, with many desktop PCs having 64-bit words. Early members of the Intel CPU family had 16-bit words, while the later ones were extended to 32-bit and then 64-bit size. In order to ensure that programs written for the early chips would run on the later ones, Intel designed the later CPUs to be capable of running in several modes, one for each bit size.

Note carefully that most machines do not allow overlapping words. That means, for example, that on a 32-bit machine, Bytes 0-3 will form a word and Bytes 4-7 will form a word, but Bytes 1-4 do NOT form a word. If your program tries to access the "word" consisting of Bytes 1-4, it may cause an execution error. On UNIX systems, for instance, you may get the error message "bus error." However, an exception to this is Intel chips, which do not require alignment on word boundaries like this.

Just as a bit string has its most significant and least significant bits, a word will have its most significant and least significant bytes. To illustrate this, suppose the word size is 32 bits and consider storage of the integer 25, which is 00000000000000000000000000011001 in bit form and 0x00000019 in hex. Three bytes will each contain 0x00 and the fourth 0x19, with the 0x19 byte being the least significant and the first 0x00 byte being the most significant.

1.3.1.2 Word Addresses

Not only does each byte have an address, but each word has one too. The address of a word will be the address of its lowest-address byte. So for instance Bytes 4-7 comprise Word 4.

1.3.1.3 "Endian-ness"

Recall that the word size of a machine is the size of the largest string on which the hardware is capable of performing addition.
A question arises as to whether the lowest-address byte in a word is treated by the hardware as the most or least significant byte. The Intel family handles this in a little-endian manner, meaning that the least significant byte within a word has the lowest address. For instance, consider the above example of the integer 25. Suppose it is stored in Word 204, which on any 32-bit machine will consist of Bytes 204, 205, 206 and 207. On a 32-bit Intel machine (or any other 32-bit little-endian machine), Byte 204 will be the least significant byte, and thus in this example will contain 0x19.

Note carefully that when we say that Byte 204 contains the least significant byte, what this really means is that the arithmetic hardware in our machine will treat it as such. If for example we tell the hardware to add the contents of Word 204 and the contents of Word 520, the hardware will start at Bytes 204 and 520, not at Bytes 207 and 523. First Byte 204 will be added to Byte 520, recording the carry, if any. Then Byte 205 will be added to Byte 521, plus the carry if any from the preceding byte addition, and so on, through Bytes 207 and 523.

SPARC chips, on the other hand, assign the least significant byte to the highest address, a big-endian scheme. This is the case for IBM mainframes too, as well as for the Java Virtual Machine.

Note that the floating-point number being stored is (except for the sign) equal to

    (1 + M/2²³) × 2^(E−127)    (1.8)

where M is the Mantissa and E is the Exponent. Make sure you agree with this. With all this in mind, let us find the representation for the example number 1.625 mentioned above. We found that the mantissa is 1.101 and the exponent is 0, and as noted earlier, the Mantissa Field is 10100000000000000000000. The Exponent Field is 0 + 127 = 127, or in bit form, 01111111. The Sign Bit is 0, since 1.625 is a positive number. So, how are the three fields then stored altogether in one 32-bit string?
Well, 32 bits fill four bytes, say at addresses n, n+1, n+2 and n+3. The format for storing the three fields is then as follows:

• Byte n: least significant eight bits of the Mantissa Field
• Byte n+1: middle eight bits of the Mantissa Field
• Byte n+2: least significant bit of the Exponent Field, and most significant seven bits of the Mantissa Field
• Byte n+3: Sign Bit, and most significant seven bits of the Exponent Field

Suppose for example we have a variable, say T, of type float in C, which the compiler has decided to store beginning at Byte 0x304a0. If the current value of T is 1.625, the bit pattern will be

    Byte 0x304a0: 0x00; Byte 0x304a1: 0x00; Byte 0x304a2: 0xd0; Byte 0x304a3: 0x3f

The reader should also verify that if the four bytes' contents are 0xe1 0x7a 0x60 0x42, then the number being represented is 56.12.

Note carefully: the storage we've been discussing here is NOT base-10. It's not even base-2, though certain components within the format are base-2. It's a different kind of representation, not "base-based."

1.4.3 Representing Character Data

This is merely a matter of choosing which bit patterns will represent which characters. The two most famous systems are the American Standard Code for Information Interchange (ASCII) and the Extended Binary Coded Decimal Interchange Code (EBCDIC). ASCII stores each character as the base-2 form of a
