16-bit LNS Conversions

This CGI simply allows you to convert between decimal floating-point values and their 16-bit LNS (Log Number System) representations. In EE480, we have adopted an LNS format that is essentially a sign bit followed by a 15-bit unsigned LNS magnitude. The magnitude is the base-2 logarithm of the absolute value, scaled by 128 (2^7) and encoded with a bias of 0x4000 added; in other words, a magnitude m with sign s represents (-1)^s * 2^((m - 0x4000)/128). The result is a dynamic range of [2.95469e-39 .. 3.36617e+38]. The LNS value that is all 0 bits is 0. The minimum LNS magnitude with a negative sign represents nan. The maximum LNS magnitude represents +/- inf.
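For concreteness, here is a minimal C sketch of the conversion, assuming exactly the format described above (magnitude = round(log2(|x|) * 128) + 0x4000, with 0x0000 as 0, 0x8000 as nan, and the maximum magnitude as +/- inf). The names (lns16_encode, lns16_decode, LNS_BIAS, and so on) are illustrative; they are not taken from the actual CGI source.

#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define LNS_BIAS  0x4000   /* added to the scaled log2 magnitude */
#define LNS_SCALE 128.0    /* 2^7: the log2 value keeps 7 fraction bits */
#define LNS_SIGN  0x8000   /* sign bit */
#define LNS_MAG   0x7fff   /* 15-bit magnitude field */

/* Encode a double as a 16-bit LNS value (sketch, not the CGI's code). */
uint16_t lns16_encode(double x)
{
    uint16_t sign = signbit(x) ? LNS_SIGN : 0;

    if (isnan(x)) return 0x8000;               /* nan: minimum magnitude, negative sign */
    if (x == 0.0) return 0x0000;               /* zero: all 0 bits */
    if (isinf(x)) return sign | LNS_MAG;       /* +/- inf: maximum magnitude */

    long mag = lround(log2(fabs(x)) * LNS_SCALE) + LNS_BIAS;
    if (mag <= 0) return 0x0000;               /* underflow to zero */
    if (mag >= LNS_MAG) return sign | LNS_MAG; /* overflow to +/- inf */
    return sign | (uint16_t)mag;
}

/* Decode a 16-bit LNS value back into a double. */
double lns16_decode(uint16_t v)
{
    uint16_t mag = v & LNS_MAG;
    double sign = (v & LNS_SIGN) ? -1.0 : 1.0;

    if (v == 0x0000) return 0.0;
    if (v == 0x8000) return NAN;
    if (mag == LNS_MAG) return sign * INFINITY;
    return sign * exp2((mag - (double)LNS_BIAS) / LNS_SCALE);
}

int main(void)
{
    printf("1  -> 0x%04x\n", lns16_encode(1.0));     /* 0x4000 */
    printf("-1 -> 0x%04x\n", lns16_encode(-1.0));    /* 0xc000 */
    printf("0x8000 -> %g\n", lns16_decode(0x8000));  /* nan */
    return 0;
}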


Enter/edit any of the following:

The decimal floating-point value:
1 becomes the LNS representation 0x4000, which decodes back to exactly 1
-1 is 0xc000
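That is, 1 has log2(1) = 0, so its stored magnitude is just the bias 0x4000; -1 has the same magnitude with the sign bit set, giving 0xc000. Likewise, 2 has log2(2) = 1, which scales to 128 = 0x0080, so 2 encodes as 0x4080.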

The hexadecimal LNS representation:
The LNS value 0x8000 represents nan
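Likewise, following the special encodings above, 0x7fff and 0xffff (the maximum magnitude with each sign) represent +inf and -inf, and 0x0000 represents 0; any other pattern decodes as +/- 2^((magnitude - 0x4000)/128).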

Operate on the following pair of hexadecimal LNS values:
0x4000 * 0x4000 = 0x4000; in decimal, 1 * 1 = 1
0x4000 / 0x4000 = 0x4000; in decimal, 1 / 1 = 1
0x4000 + 0x4000 = 0x4080; in decimal, 1 + 1 = 2
0x4000 EQ 0x4000; in decimal, 1 EQ 1
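These operations can be computed directly on the bit patterns: multiplication and division just add or subtract the biased log magnitudes (removing or restoring one bias of 0x4000), while addition uses a Gaussian-log correction term, log2(1 + 2^(-d/128)). Below is a minimal C sketch under the same format assumptions as above; it ignores the special encodings (0, nan, +/- inf) and overflow/underflow, handles only same-sign addition, and uses illustrative names rather than the CGI's actual code.

#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define LNS_BIAS 0x4000
#define LNS_SIGN 0x8000
#define LNS_MAG  0x7fff

/* a * b: add the log magnitudes (removing one bias), xor the signs. */
uint16_t lns16_mul(uint16_t a, uint16_t b)
{
    uint16_t sign = (a ^ b) & LNS_SIGN;
    return sign | (((a & LNS_MAG) + (b & LNS_MAG) - LNS_BIAS) & LNS_MAG);
}

/* a / b: subtract the log magnitudes (restoring the bias), xor the signs. */
uint16_t lns16_div(uint16_t a, uint16_t b)
{
    uint16_t sign = (a ^ b) & LNS_SIGN;
    return sign | (((a & LNS_MAG) - (b & LNS_MAG) + LNS_BIAS) & LNS_MAG);
}

/* a + b for operands with the same sign: the result magnitude is
   max(ma, mb) plus the Gaussian-log term 128*log2(1 + 2^(-diff/128)),
   which is typically read from a small table in hardware. */
uint16_t lns16_add(uint16_t a, uint16_t b)
{
    uint16_t ma = a & LNS_MAG, mb = b & LNS_MAG;
    uint16_t sign = a & LNS_SIGN;
    int diff = (ma > mb) ? ma - mb : mb - ma;
    int big  = (ma > mb) ? ma : mb;
    long mag = big + lround(128.0 * log2(1.0 + exp2(-diff / 128.0)));
    return sign | (uint16_t)(mag & LNS_MAG);
}

/* EQ: with one bit pattern per value, equality is integer equality. */
int lns16_eq(uint16_t a, uint16_t b) { return a == b; }

int main(void)
{
    printf("0x4000 * 0x4000 = 0x%04x\n", lns16_mul(0x4000, 0x4000)); /* 0x4000 */
    printf("0x4000 / 0x4000 = 0x%04x\n", lns16_div(0x4000, 0x4000)); /* 0x4000 */
    printf("0x4000 + 0x4000 = 0x%04x\n", lns16_add(0x4000, 0x4000)); /* 0x4080 */
    printf("0x4000 EQ 0x4000 -> %d\n",   lns16_eq(0x4000, 0x4000));  /* 1 */
    return 0;
}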


The C program that generated this page was written by Hank Dietz using the CGIC library to implement the CGI interface.

