Whether it’s for a family photo album, a computer program, or a Fortune 500 company’s business-critical system, data storage is a must-have for nearly everyone. As technology has evolved, computers have allowed for increasingly capacious and efficient data storage, which in turn has allowed increasingly sophisticated ways to use it.
Data storage devices have evolved drastically from being large trunks with the capacity to hold a few kilobytes of data, to microchips able to hold a few gigabytes of data.
Data is still growing at explosive rates in today’s businesses. Big Data is increasing storage demands in a way that could only be imagined just a few short years ago. A typical data record has tripled, if not quadrupled, in size in just the last five years; moreover, this data now takes many forms, including structured, semi-structured, and unstructured. In fact, according to a recent IBM study, 2.5 quintillion bytes of data are written every day, and 90% of the world’s data has been created in the last two years alone. It is glaringly apparent that the size of databases is growing exponentially.
Punch cards were the first effort at storing data in a machine-readable form. They were used to communicate information to equipment before computers were developed: the punched holes originally represented a sequence of instructions for pieces of equipment, such as textile looms and player pianos, with the holes acting as on/off switches.
In 1837, Charles Babbage proposed the Analytical Engine, a primitive mechanical calculator that used punch cards for both instructions and responses. Decades later, Herman Hollerith developed this idea further, having the holes represent not just a sequence of instructions, but stored data the machine could read.
In the 1960s, magnetic storage gradually replaced punch cards as the primary means of data storage, and by 1990 the combination of affordable personal computers and magnetic disk storage had made punch cards nearly obsolete.
In the past, the terms “Data Storage” and “memory” were often used interchangeably. However, at present, Data Storage is an umbrella phrase that includes memory. Data Storage is often considered long term, while memory is frequently described as short term.
Vacuum Tubes for Random Access Memory
In 1948, Professor Frederic Williams and colleagues developed the first Random Access Memory (RAM) for storing frequently used programming instructions, in turn increasing the overall speed of the computer. Data in RAM (sometimes called volatile memory) is temporary: when a computer loses power, the data is lost, and often frustratingly irretrievable. ROM (Read-Only Memory), on the other hand, is permanently written and remains available after a computer has lost power.
Magnetic Core, Twistor & Bubble Memory
In the late 1940s, magnetic core memory was developed and patented, and over the following decade it became the primary way early computers wrote, read, and stored data. In 1953, MIT purchased the patent and developed the first computer to use the technology, called the Whirlwind. Magnetic core memories, being faster and more efficient than punch cards, became popular very quickly. However, manufacturing them was difficult and time-consuming: it involved delicate work by assemblers with steady hands, using microscopes to tediously thread thin wires through very small holes.
Twistor magnetic memory was invented in 1957 by Andrew Bobeck. It created computer memories using very fine magnetic wires interwoven with current-carrying wires. It is similar to core memory, but the wrapped magnetic wires replace the circular magnets, and each intersection in the network represents one bit of data.
The Twistor concept led Bobeck to develop another short-lived magnetic memory technology in the 1980s, known as Bubble Memory. Bubble memory is a thin magnetic film holding small magnetized areas that look like bubbles.
Semiconductor Memory
In 1968, the newly formed Intel Corporation began selling a semiconductor chip with 2,000 bits of memory. A semiconductor memory chip stores data in a small circuit referred to as a memory cell. Memory cells are made up of miniaturized transistors and/or miniaturized capacitors, which act as on/off switches.
Magnetic Disk Storage
Magnetic drums were the first incarnation of magnetic disk storage. A read/write head was provided for each drum track, staggered around the drum’s circumference. With no head movement to wait for, access time was quite short, bounded by one revolution of the drum. If multiple heads were used, data could be transferred quickly, helping to compensate for the lack of RAM in these systems.
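The "one revolution" bound on drum access time can be made concrete with a quick back-of-envelope calculation. A minimal sketch, with an assumed (illustrative, not model-specific) rotation speed:

```python
# Sketch: why drum access time is bounded by "one revolution".
# The RPM figure is an assumption for illustration, not a real drum spec.
RPM = 12500                      # assumed rotation speed of the drum
seconds_per_rev = 60 / RPM       # time for one full rotation

worst_case_latency = seconds_per_rev      # the data just passed under the head
average_latency = seconds_per_rev / 2     # on average, wait half a turn

print(f"worst case: {worst_case_latency * 1000:.2f} ms")
print(f"average:    {average_latency * 1000:.2f} ms")
```

Because the heads are fixed, rotational delay is the only mechanical wait; a moving-head disk would add seek time on top of this.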
In the 1960s, an inventor named James T. Russell thought about, and worked on, the idea of using light as a mechanism to record, and then replay, music. No one took his invention of the optical disc seriously until 1975, when Sony paid Russell to finish his project, leading eventually to CDs (Compact Discs), DVDs (Digital Versatile Discs), and Blu-ray. (By convention, the word “disk” is used for magnetic recordings, while “disc” is used for optical recordings.)
The Magneto-Optical disc, a hybrid storage medium, was introduced in 1990. This disc format uses both magnetic and optical technologies for storing and retrieving digital data. The discs normally come in 3.5-inch and 5.25-inch sizes. The system reads sections of the disc with different magnetic alignments: laser light reflected from the different polarizations varies, per the Kerr effect, providing an on/off bit for data storage.
Flash drives appeared on the market late in the year 2000. A flash drive plugs into computers with a built-in USB plug, making it a small, easily removable, very portable storage device. Unlike a traditional hard drive or an optical drive, it has no moving parts, instead combining chips and transistors for maximum functionality. Generally, a flash drive’s storage capacity ranges from 8 to 64 GB. (Other sizes are available, but can be difficult to find.)
A flash drive can be rewritten a very large, though not unlimited, number of times and is unaffected by electromagnetic interference (making it ideal for moving through airport security). Because of this, flash drives have entirely replaced floppy disks for portable storage, and with their large storage capacity and low cost, they are now on the verge of replacing CDs and DVDs.
Flash drives are sometimes called pen drives, USB drives, thumb drives, or jump drives. Solid State Drives (SSDs) are sometimes referred to as flash drives as well, but they are larger and less convenient to transport.
Solid State Drives (SSD)
Variations of Solid State Drives have been used since the 1950s. An SSD is a nonvolatile storage device that does everything a hard drive does, storing data on interlinked flash memory chips. The memory chips can either be part of the system’s motherboard or housed in a separate enclosure designed and wired to stand in for a laptop or desktop hard drive. These flash memory chips are of a different type than those used in USB thumb drives, making them faster and more reliable; as a result, an SSD is more expensive than a USB thumb drive of the same capacity.
SSDs can be portable, but will not fit in your pocket.
Data Silos & Data Lakes
Data Silos are a data storage system of sorts. A Data Silo stores data for a business, or a department of the business, that is incompatible with the rest of its systems but is deemed important enough to save for later translation. For many businesses, this amounted to a huge amount of information. Data Silos eventually became useful as a source of information for Big Data, and came to be used deliberately for that purpose. Then came Data Lakes.
Data Lakes were formed specifically to store and process Big Data, with multiple organizations pooling huge amounts of information into a single Data Lake. A Data Lake stores data in its original format and is typically processed by a NoSQL database (a Data Warehouse, by contrast, requires data to fit a predefined schema). NoSQL can handle the data in all its various forms and allows for the processing of raw data. Most of this information can be accessed by its users via the internet.
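The key idea of "storing data in its original format" is often called schema-on-read: structure is applied when the data is queried, not when it is written. A minimal sketch, with made-up records, of how a query can pick structured documents out of a heterogeneous store:

```python
import json

# A toy "data lake": records kept in their original form.
# The records themselves are invented for illustration.
lake = [
    json.dumps({"type": "order", "id": 1, "amount": 25.0}),     # structured
    json.dumps({"type": "clickstream", "events": ["a", "b"]}),  # semi-structured
    "2024-01-01 server rebooted",                               # raw log text
]

def orders(raw_records):
    """Apply a schema at read time: keep only parseable 'order' documents."""
    for rec in raw_records:
        try:
            doc = json.loads(rec)
        except ValueError:
            continue  # unstructured text doesn't parse as JSON; skip it
        if isinstance(doc, dict) and doc.get("type") == "order":
            yield doc

total = sum(o["amount"] for o in orders(lake))
print(total)  # 25.0
```

A Data Warehouse would instead reject or transform the clickstream and log records at load time; the lake keeps everything and defers interpretation to each query.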
Cloud Data Storage
The Internet made the Cloud available as a service. Improvements such as continuously falling storage costs and improved bandwidth have made it economical for individuals and businesses to use the Cloud for data storage. The Cloud offers its users near-infinite scalability and accessibility to data from anywhere, at any time. It is often used to back up information initially stored on site, making it available should the company’s own system suffer a failure. Cloud security is a significant concern among users, and service providers have built security measures, such as encryption and authentication, into the services they provide.
As technology has evolved and our lifestyles have changed over the years, so has our everlasting hunger for more and more data storage capacity. After all these years of evolution, we now have some extremely high-capacity and powerful storage devices which are shaping the world of computers and meeting our current needs for data storage. First is the Seagate 4TB hard disk drive, a sleek HDD offering enormous storage space (4TB) and a high data transfer rate of nearly 1GB/s.
Second is the world’s first 1TB USB flash drive, introduced by Kingston. This is surely one powerful little device, with a data transfer rate of about 240MB/s.
Third is the Lexar microSDXC memory card, which offers a storage capacity of about 256GB and data transfer at 90MB/s.
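The quoted rates can be put in perspective with a quick calculation: how long would it take to fill each device end to end at its advertised speed? A sketch using the figures above (assuming decimal units, 1 TB = 1000 GB = 1,000,000 MB):

```python
# Back-of-envelope: time to fill each device at its quoted transfer rate.
# Figures are the ones quoted above; decimal (SI) units are assumed.
devices = {
    "Seagate 4TB HDD":       (4000, 1000),  # (capacity in GB, rate in MB/s)
    "Kingston 1TB flash":    (1000, 240),
    "Lexar 256GB microSDXC": (256, 90),
}

def fill_time_hours(capacity_gb, rate_mb_s):
    """Hours to write the full capacity at a sustained rate."""
    return capacity_gb * 1000 / rate_mb_s / 3600

for name, (cap, rate) in devices.items():
    print(f"{name}: {fill_time_hours(cap, rate):.1f} hours")
```

Even at these headline rates, filling any of the three takes on the order of an hour, a reminder that capacity has grown faster than transfer speed.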
Cloud Storage itself represents another great leap in storage technology. To give a brief introduction, data is stored remotely in “the Cloud” and can be accessed from any device, anywhere, using the internet.
This has made data much more portable: we no longer need to carry hard disks or other storage devices with us, and can access our data anywhere, anytime, from any device, using Cloud Storage technology.
Currently, Cloud Storage is available from a number of providers, such as Dropbox, Box, and Google Drive.
The world’s appetite for storage is continuously growing thanks to the number of devices that are generating data, which is contributing to the warehouses of information kept and classified as big data. The growth in big data is fueling innovations in developing cognitive learning applications, such as artificial intelligence, machine learning, and big-data analytics. This in turn is creating a world that is more predictive, productive, and personal because it enables people to make more informed decisions.
Meanwhile, innovations in storage such as solid-state drives have helped to make tasks that were previously impossible – due to performance bottlenecks – entirely practical. For instance, as social media has grown rapidly to accommodate user bases that number in the hundreds of millions, the demand for updated, real-time information has created latency problems that have been reduced as much as three-fold thanks to the performance capabilities of SSDs.
Object storage, on the other hand, enables public and private storage clouds to manage data at enormous scale. Amazon’s S3 protocol has become the accepted industry standard because it enables access to data anywhere, at any time, across any device. Object storage achieves this scale by scaling out instead of scaling up, which can accommodate any required level of storage capacity.
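The essence of the object model is a flat namespace of keys rather than a hierarchy of folders, and "scaling out" typically means distributing keys across nodes. A minimal in-memory sketch of these semantics (this is an illustration of the idea, not the S3 API; the class and key names are invented):

```python
import hashlib

class ObjectStore:
    """Toy object store: flat key namespace, keys hashed across nodes."""

    def __init__(self, num_nodes=4):
        # Each "node" is just a dict here; in a real system it is a server.
        # Adding nodes (scaling out) adds capacity without a bigger controller.
        self.nodes = [{} for _ in range(num_nodes)]

    def _node_for(self, key):
        # Deterministically map a key to one node via a content hash.
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key, data):
        self._node_for(key)[key] = data

    def get(self, key):
        return self._node_for(key)[key]

store = ObjectStore()
store.put("photos/2024/cat.jpg", b"...bytes...")   # "/" is part of the key,
print(store.get("photos/2024/cat.jpg"))            # not a real directory
```

Real systems add replication and rebalancing when nodes join or leave, but the key-to-node mapping above is the core of why capacity can grow by simply adding machines.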
While SSDs and object storage are the latest trends transforming storage now, what may be surprising is that this evolution goes back much farther than just the last few years. Another major shift is Software Defined Storage (SDS). The defining characteristic of SDS is that it lifts the intelligence of the storage system out of hardware and into an overarching layer of software, an approach that acknowledges the reality that it is simply much easier to do things in software than in hardware.
Software Defined Storage
Shifting the storage system’s intelligence from hardware to software gives an SDS implementation some important advantages over traditional storage solutions. SDS-based storage is inherently highly scalable. Rather than scaling up by adding additional drives behind a storage controller, SDS systems scale out by incorporating additional nodes.
Perhaps the greatest advantage of SDS is its substantial reduction in management complexity. Administrators, users, and applications (via APIs) interact with the system through a single, consistent interface that is the same no matter what mix of storage hardware devices may be employed. In fact, the software-defined paradigm provides users with the ability to manage multiple data centers as if they were a single computer.
Because with SDS the entire system is managed at a granular level by software, sophisticated functionality can be implemented and controlled at a central location and uniformly applied across the system. For example, both low-level functions such as deduplication, replication, and snapshots, and high-level features such as backup/restore and disaster recovery regimes, can be implemented once and extended to all devices in the system, no matter what their individual capabilities or characteristics might be. Administrators are not required to concern themselves with the idiosyncrasies of the various hardware/firmware configurations that may be included in the system.
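Deduplication is a good illustration of "implement once in software, apply to any device": split data into blocks, hash each block, and store each unique block only once. A minimal sketch of such a software layer (the class, names, and tiny block size are invented for illustration):

```python
import hashlib

class DedupStore:
    """Toy SDS-style layer: block-level dedup done once, in software."""

    def __init__(self):
        self.blocks = {}  # content hash -> block bytes (each stored once)
        self.files = {}   # file name -> ordered list of block hashes

    def write(self, name, data, block_size=4):
        # Tiny block size for demonstration; real systems use KB-sized blocks.
        hashes = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)  # duplicate blocks stored once
            hashes.append(h)
        self.files[name] = hashes

    def read(self, name):
        # Reassemble the file from its block hashes.
        return b"".join(self.blocks[h] for h in self.files[name])

store = DedupStore()
store.write("a.txt", b"AAAABBBBAAAA")  # the "AAAA" block appears twice
print(store.read("a.txt"))             # round-trips intact
print(len(store.blocks))               # only 2 unique blocks are stored
```

Because this logic lives above the devices, the same dedup policy applies whether the blocks ultimately land on an HDD, an SSD, or a remote node, which is exactly the uniformity the paragraph above describes.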
After 57 years of continuous development, we see that wardrobe-sized hard disk drives that once held a few megabytes of data have given way to chips hardly the size of a coin that hold a quarter of a terabyte. And with the invention of Cloud Storage and wireless storage technology, data storage and transfer have become painless.
Technology has taken huge leaps, and we, the end users, are enjoying the fruits of these advancements. I hope we will continue to see such amazing products in the future.