TECHNOLOGY REVOLUTIONS How the Motion Picture Industry Has Advanced
by James Jaeger, Paul Gibbons and Ken Gullekson
People who speak of THE technology revolution - especially in the motion picture industry - may be surprised to realize that we have at least ten technology revolutions going on in the entertainment industry alone. This paper will attempt to identify each revolution and discuss the problems and challenges we face every time we push a button or click a mouse. In rough order of appearance, the technology revolutions that have been impinging on cameras, computers, software, transfer equipment and various devices - making them easy, confusing or impossible to use - are as follows:
THE CHEMICAL vs. ELECTRONIC REVOLUTION
THE ANALOG vs. DIGITAL REVOLUTION
THE LINEAR vs. RANDOM ACCESS REVOLUTION
THE CGI REVOLUTION
THE ASPECT RATIO REVOLUTION
THE RESOLUTION REVOLUTION
THE INPUT DEVICE REVOLUTION
THE MOBILE REVOLUTION
THE CONNECTIVITY REVOLUTION
THE CENTRAL vs. DISTRIBUTED REVOLUTION
It could be argued that some of these revolutions are subsets of other revolutions, if not wars or battles in their own right, but for now we will break them down into these categories.
THE CHEMICAL vs. ELECTRONIC REVOLUTION

The most obvious revolution has been the ongoing struggle between film and video tape - film being a "chemical" process and tape being an "electronic" process. This revolution could be considered to have started the moment the first video TV camera was marketed by RCA, and it has continued to this day.
Film started out in various formats, ranging from 35mm to 16mm and later to 65mm negative printed on 70mm film stock for release. In the amateur field, "Double 8" film became "Super 8" film, both 8 millimeters wide but each having different picture areas and sprocket sizes. Regardless, all film runs through a mechanical transport device known as a camera and is projected by a mechanical device known as a projector. The propagation rate of film has ranged from 16 frames per second (fps) to the 60 fps of a process known as ShowScan. The standard rate in the United States is 24 fps. For over 100 years, the chemical image of film - exposed silver halide crystals on a clear plastic base (first of cellulose nitrate, later of cellulose acetate and polyester) - has been king.
Because film is literally made with silver, it is expensive. This expense dictated that only the Hollywood movie studios, with built-in national distribution systems, could easily afford to use it. But people other than the 10,273 who work and profit in Hollywood wanted to make movies too. This "free market" competition thus gave rise to an entirely new technology, one based not on expensive film but on video tape, a cheap image-storing medium that almost anyone could afford. Thus was born the modern television industry, in direct competition with Hollywood's monopoly on the chemical image.
As the decades rolled on, both film and video technologies improved. Kodak and others offered film stocks with greater resolution, color saturation, latitude, sensitivity and shadow detail. Low-light stocks such as 5295, for instance, were a breakthrough that made features like the nightlife-saturated TAXI DRIVER possible. Video tape could not compete in this area, as "low lux" video cameras did not exist at the time. Also, video had to be "exhibited" on a small, squarish box known as a "television," or "TV" for short. This 4 x 3 box - only a little wider than high - had annoying horizontal lines and all manner of "artifacts" flickering throughout. Film presentations may have had scratches and blotches, but "grain" was seen as an "artistic expression," and the wider "aspect ratio" of film (1.85 units wide for every 1 unit high) - a lot wider than high - added to the mystique and desirability of the "silver screen."
At first, the video tape used by the TV networks was 2 inches wide and ran off huge reels at a speed of 15 inches per second. Then, in 1976, 1-inch broadcast video tape came into use, gradually easing out the 2-inch format after a 20-year reign. By the mid-70s, a 3/4-inch video format that had originally been designed for consumer convenience in "U-matic" cassettes was adopted by broadcasters and serious videographers. In 1975 and 1976 two competing consumer video formats were introduced in Japan, and a war began between Betamax and VHS tape - both only 1/2-inch wide, but now stored in handy "cassettes." Even though Betamax was higher quality, VHS won out because Sony limited Betamax to a running time of only 60 minutes, whereas VHS ran for 120 minutes - the average running time of a contemporary Hollywood feature film. Beta, however - in high-performance broadcast versions - found use with professionals for many years afterward and is still used to this day.
But all the while this intense competition was going on, there was really just one revolution: the competition between chemical-based image technologies and electronic-based image technologies. We can only begin to classify this competition as a "revolution" when we place it in the perspective of what was to come next and how the chemical era is now being completely replaced by the electronic era.
THE ANALOG vs. DIGITAL REVOLUTION

The full revolution from chemicals to electronics didn't, and couldn't, manifest itself until the analog age merged into the digital age. In short, analog image capture and display is a technology that NEVER had the potential to out-compete the silver halide image of the mechanical motion picture device. And for years, even decades, filmmakers knew this "in their bones" and knew that "video was coming" even though it - like Artificial Intelligence and plasma fusion - never quite seemed to arrive.
But the allure of inexpensive tape prompted - and enabled - many filmmakers to shoot features in the video format, even though they knew the image quality would be inferior, there would probably be no theatrical release, and a stigma would be placed upon them by the wider community. One such feature that suffered these inevitabilities was called OVER EXPOSED (renamed SNAP SHOT BLUES). This 93-minute feature was originated on 3/4-inch tape, posted on 3/4-inch tape and delivered on 1-inch tape. Even though the film received distribution and sold in a number of foreign markets, it was never picked up by an American distributor for the simple fact that it was not available on 35mm film. Today over a million people from all over the world have screened this film over the Internet, where the origination medium plays a junior role to story and content.
Like movies, the music industry started out with "chemicals" - shellac, then vinyl records - and then went "electronic," bifurcating into analog tape and then digital CD technologies. But this was audio, a human sense perception that requires only a fraction of the data rate and bandwidth that sight requires. Again, the filmmakers of the world watched sound move into a digital universe of mind-boggling dynamic range and noise-free clarity, but were left out of the party - film continued to look better than tape. Movies on film had a feel, a story-time presence that video tape simply couldn't match. Nevertheless the feeling that "video was coming" persisted throughout the '80s and '90s.
And as computer processing speeds and data storage devices improved over those decades, something amazing happened: Video technologies began slowly following audio technologies. The hope that "video was coming" began to have new meaning as the Industry's problem, like the electronics revolution itself, bifurcated more than ever into two distinct problems:
The FILM LOOK Problem and the POST PRODUCTION Problem.
No doubt, the electronics revolution opened the door to video tape editing, solving the post production problem - that is, the slow, tedious manipulation of picture and sound on film. Video tape editing - even on 3/4-inch U-matic systems - made Rivas splicers and trim bins obsolete. Starting around 1985, most of the industry's filmmakers began to migrate from film editing to tape editing. It was becoming evident that, whether one originated on film or tape, post production went faster and better in the electronic universe. This was great! The only problem was: Posting a "film" on tape was very expensive and extremely high-pressure. Instead of renting a room by the month in some decrepit Hollywood warehouse, along with a complement of cheap editing equipment to leisurely "cut a picture," editors found themselves in high-pressure "edit suites" being charged $50, $100 or $300 per hour depending on whether they were "off-line" or "on-line." In this era, any money saved by originating on tape was easily sucked up by "posting" on tape. Thus, was video really coming?
As tape - analog and digital - continued to NOT look and feel like film, post production continued to be expensive and high pressure. As tape migrated from analog to D1, then D2 and so on for many Ds, only the biggest producers were able to afford the cameras to run such digital tape and the edit systems to edit their footage.
Analog and digital cameras and edit systems continued to battle for supremacy during this revolutionary period. Analog cameras that relied on heavy tape decks began giving way to smaller decks called "port-a-packs," and then the "port-a-pack" was incorporated into the camera itself. These were the first "camcorders," and this analog technology filtered down to the consumer and prosumer levels, completely overwhelming the Super 8 film industry.
Suddenly an entire crop of USC- and NYU-type film students were shooting their thesis films on Sony camcorders rather than Elmo 1012S cameras. But this wasn't the only reason for the youth migration from film to tape. In 1979, the price of silver jumped 712% (from $6 an ounce to $49 an ounce) because Nelson and William Hunt, sons of Texas oil billionaire Haroldson Hunt, Jr., attempted to corner the market. Kodak immediately raised the price of a 50-foot cassette of Super 8 film from $5 to $10, citing the high price of silver as its reason. But when the silver crisis was over and the price of silver dropped back to $6 per ounce in 1982 (and even $4 per ounce by 1991), Kodak kept the price of Super 8 film at $10 per cassette, failing to cite corporate greed as its reason.
Eventually Kodak raised the price of Super 8 to $15 and even $20 per 50-foot cassette and film students - Kodak's future 35mm customers - jumped ship as video camcorders eagerly ravaged the market.
So greed and technology, if not executive stupidity, are responsible for a major part of the film to electronics revolution. But as this revolution, and even the analog-to-digital revolution, were proceeding, a separate revolution was happening.
THE LINEAR vs. RANDOM ACCESS REVOLUTION

As the revolutions between film and electronics battled at the set and editing room levels AND the competition between analog and digital camera technologies continued, yet ANOTHER revolution developed in the editing room - one that was neither analog nor digital. This revolution was concerned with the very essence of image manipulation itself and paved the way for all the other revolutions about to come.
As discussed above, the capture and storage of images started out as a chemical phenomenon. A plastic base had a "film" of chemicals - an "emulsion" containing silver halide particles - spread on it to dry. When dry, the base, with this emulsion of silver halide particles on it, was cut into long strands 35 millimeters wide and perforated along its length. This was then coiled up into rolls, boxed and shipped to movie producers as simply "film stock." Later, the base, instead of being coated with a silver halide emulsion, was coated with an emulsion of ferric oxide, a magnetic compound. It was the exact same idea as silver halide film - all mechanical and chemical - but with the added difference that the resultant image was produced INDIRECTLY. The "film" spread on the plastic base thus became known as "video tape," and this tape carried NOT an actual image, but only an electromagnetic REPRESENTATION of that image embodied as INFORMATION, or "data."
Data is the true essence of the chemical vs. electronic revolution; however, as mentioned above, it cannot be fully appreciated until one groks the difference between actual information and represented information.
And the place this difference became most evident was in the editing room. Even though both film and video tape "carry" images, film represents image directly (through projection) and tape represents image indirectly (through data). But since all movies are made in a linear fashion - in that the film or tape runs through the camera in a continuous motion - the true differences between directly and indirectly represented images cannot be appreciated on the set. Only in the editing room can they be appreciated because the editing process depends more on the RANDOM access of data than on the CONTINUOUS access of data. In short, no matter what the quality or "look," electronic data is much easier to access randomly through electronics than in a linear fashion over physical systems.
Realizing this, major filmmaker George Lucas developed the "Edit Droid" - billed as the first random access electronic editing system. Unfortunately the system had two basic problems: It cost about $700,000, and it was based on a physical data system, not a truly random access electronic data system. The Edit Droid of 1985 was thus loaded with six identical tapes of the show to be cut. Each tape was time coded. As the editor selected a shot, a computer in the Edit Droid cued up the first tape to the start and end points of the selected shot. When the editor selected the second shot, the computer cued up the SECOND tape to the start and end points of THAT shot. This process would continue for the third, fourth, fifth and sixth shots. Then, as the editor played back the "edited" show, the first tape, being cued up, would play the first SHOT, and then, at the exact end point of the first shot, the second TAPE would seamlessly "insert edit" the start point of the second SHOT. This would proceed through all six tapes, and while tapes 5 and 6 were playing, tape 1 would re-cue the 7th shot, as previously programmed by the editor. The 8th shot, 9th shot, etc., would likewise be cued up as playback proceeded through each of the tapes.
So the system continuously cued while it played. In this manner, the Edit Droid was said to randomly access the movie's footage and assemble a "cuts list" based on the time code of each shot being assembled into a finished, edited show.
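To make the round-robin scheme concrete, here is a minimal sketch, in Python, of the six-deck cueing logic described above. The shot labels, function name and print-outs are our own illustrations, not anything taken from the actual Edit Droid software:

    NUM_DECKS = 6  # the Edit Droid's six identical, time-coded tapes

    def play_show(shots):
        # Shot n plays from deck (n mod 6); as soon as a deck finishes
        # playing, it re-cues its NEXT assigned shot while the others play.
        for i, shot in enumerate(shots):
            deck = i % NUM_DECKS
            print("deck %d plays shot %d (%s)" % (deck + 1, i + 1, shot))
            if i + NUM_DECKS < len(shots):
                print("  deck %d re-cues shot %d in the background"
                      % (deck + 1, i + NUM_DECKS + 1))

    play_show(["timecode 01:%02d:00" % n for n in range(9)])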
Thus, with the Edit Droid, we have a unique and ingenious methodology for blending the PHYSICAL manipulation of tape with the ease of accessing data off that tape. But bear in mind, this system was all analog. All this happened BEFORE digital tape was fully developed; the electronic image was stored as an analog signal, not a digital signal. Given this, the Edit Droid cannot truly be called a "random access" system. Lucas probably knew this, which is why he developed the next version of the Edit Droid.
The next version of the Edit Droid replaced the six linear tape transports with two laser disc transports. The idea behind this was true random access. A laser beam can access data on a disc randomly, in that one doesn't have to wind through an entire reel to find a particular shot. One can just drop the laser "needle" on the desired shot and have the shot play back instantly. This speed of access made it possible to replace the six tapes with two laser discs.
Thus the second generation Edit Droid can be said to be the gateway technology that led to today's electronic editing systems, now called "Non-linear Editing" or simply NLE. The fast dual laser discs of the Edit Droid were simply replaced with a single, ultra fast computer hard drive. With access time being measured in milliseconds, a computer hard drive can deliver the true "random access" picture in a seemingly continuous flow.
With, and as a result of, the computer revolution, the twin revolutions addressing FILM LOOK and POST PRODUCTION proceeded well into the mid 2000s.
The last person to edit a feature film on a 35mm Moviola is said to have been Steven Spielberg. When asked about this, he replied that it gave him "more time to reflect on shots." This sentiment can be respected, but today NLE has totally inundated all aspects of the motion picture industry and is now the standard. Three major competitors - Avid, Final Cut Pro and Premiere Pro - dominate the professional field. All are software-based and run on PCs, Macs and/or UNIX systems.
Even though the electronic revolution started out as analog, it showed great promise for editing. When analog gave way to digital it became even more apparent that image representation of data was much more useful in the post production universe than in the production universe.
Indeed, image capture technology lagged post production technology until about 2001, when ONCE UPON A TIME IN MEXICO became one of the first well-known features to be shot in high-def digital video. This was made possible by the Sony HDW-F900 camera, which George Lucas helped develop. In this way, Lucas was instrumental in culminating the "film look" revolution AND the "post production" revolution. This, among other reasons, is why he deserved to get a Thalberg in 1991. But James Cameron, who has taken the torch to new realms - known as CGI - deserves the Thalberg in 2014.
THE CGI REVOLUTION

Once computers became a serious factor in the production of motion pictures, new realms of cinematic existence became routine for the "silver" screen through a technology known as CGI (Computer Generated Imagery).
Even though the screen is still silver - because the Industry has not universally migrated away from 35mm projection - a number of "camps" have developed for the production/post-production work flow.
Camp A likes shooting in film (16mm, 35mm, 65mm) and projecting on film (16mm, 35mm, 70mm). To them, there is nothing better than the soft-toned, grain-defined "immediacy" of the "film look." And there is great merit to this, as film does look great.
On the other hand Camp B likes shooting on digital media. To them, there is nothing better than the versatility of the digital image, an image that can be given soft-tones, grain, "immediacy" and even the "film look." And there is great merit to this, as digital tape can look great.
But then there is Camp C, a camp that appreciates and makes use of both the film universe AND the digital universe. This group likes to originate in digital media, edit in digital NLE but then transfer the tape to 35mm film for release.
Still there is Camp D. This camp also makes use of both film and digital universes by originating in film, transferring to digital media for editing, and then releasing in film OR digital.
Recently, theaters have been converting to digital release, tossing out film projection altogether. This is actually yet another revolution as movies released digitally look great when projected with today's digital projectors. This revolution opens the door to exciting possibilities for independent producers getting theatrical exhibition because expensive 35mm release prints (about $2,000 each) can be replaced with inexpensive DVDs and hard drives.
Thus we can see there are revolutions within revolutions and battles going on during and around the revolutions. Even though Non-Linear Editing is the way to go for most, the Industry divides religiously over film or digital for various aspects of the production process.
The cinematographer Lee Garmes alarmed the film world in 1972 when, after shooting a short named WHY on tape, he declared in the pages of American Cinematographer magazine: "I hope I never see another piece of film." In other words, had Lee Garmes seen a RED digital camera before he passed away, he probably would have said goodbye to the Panaflex.
But even though the digital domain makes CGI possible, the CGI revolution is not about film vs. tape or even digital vs. analog. It's about practical effects vs. digital effects. In other words, do you crash a real car on the set or generate a fake "car" in the computer and "crash" that? Do you build 18th century New York on a set or do you build it in a computer application like Maya?
Although CGI has shown us worlds and actions never before possible on the screen, many are becoming weary of the constant unreality of computerized images. When we see a man jump across a chasm that's impossibly wide, we fail to suspend our disbelief, as the philosopher-poet Samuel Taylor Coleridge might have put it. We know in our gut that the stunt we just saw was impossible, even though it looked smooth and flawless. We thus lose interest in the story and the character because the story is developing out of "events" that are not possible.
Even though CGI has opened up wonderful worlds that many of us have enjoyed and marveled over during the past decade or so, its improper or gratuitous use can negate its effectiveness. This revolution is going on in many minds at this time and is one of the main determinants of theatrical audience attendance. In short, a movie cannot just be a display of imagery; it must above all be a well-told story with compelling characters. CGI either helps or hinders these objectives.
THE ASPECT RATIO REVOLUTION

While the above-discussed revolutions proceed, there are a number of smaller technical revolutions going on - even one that Euclid may have started thousands of years ago. The "aspect ratio" war could be said to have started with the Ancient Greeks and their "golden rectangle." The golden rectangle is a rectangle such that, when a square area is removed, the remaining area is another rectangle with the exact same aspect ratio as the first. The term "aspect ratio" is just a fancy way of comparing how wide (horizontal) a picture frame is to how high (vertical) it is. The aspect ratio of the golden rectangle is 1.618 to 1. That means it's 1.618 units wide for every 1 unit high.
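For the mathematically curious, the defining property of the golden rectangle can be verified in a few lines of Python - a sketch of the arithmetic, nothing more:

    from math import sqrt, isclose

    phi = (1 + sqrt(5)) / 2   # the golden ratio, about 1.618

    # Remove a 1 x 1 square from a phi x 1 golden rectangle; the piece
    # left over is (phi - 1) wide and 1 high. Stood on end, its aspect
    # ratio is 1 / (phi - 1), which is the golden ratio all over again.
    assert isclose(1 / (phi - 1), phi)
    print("golden rectangle aspect ratio: %.3f to 1" % phi)   # 1.618 to 1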
But not everyone agrees that the golden rectangle has the most "pleasing" aspect ratio. Thus, the motion picture Industry has been battling over this for over 100 years and every manner of aspect ratio has been used at one time or another.
In 1892 Thomas Edison - inventor of the motion picture camera - ignored the Ancient Greeks and established an aspect ratio of 1.33 to 1 because he felt it mimicked the human eye's visual field of about 155 degrees wide by 120 degrees high. Later the TV industry adopted much the same aspect ratio, giving us those boxy-looking TV sets, which later became boxy-looking computer monitors in the 1970s.
But after TV started challenging the movie Industry, Hollywood suddenly realized the Greeks were onto something and "widescreen" cinema was born.
At first, 35mm frames were simply cropped on the top and bottom, giving a wider image but wasting much of the picture area. This resulted in aspect ratios from 1.66:1 to 2.0:1, with 1.85:1 being the most prevalent and still used today, mostly on low-budget films.
But as far as the studios were concerned, cropping was not the answer. Thus, cropping gave way to squeezing a wider image onto 35mm film using a new and expensive technology: anamorphic lenses. The most successful processes were CinemaScope and Panavision, and these produced aspect ratios from 2.35:1 to 2.40:1.
Then the Industry got the idea of using more of the picture area on 35mm film; thus Super 35, VistaVision and Arnoldscope were born. Super 35 simply increased the image size on 35mm film, but VistaVision, developed by Paramount, actually ran 35mm film horizontally through the camera and the projector, taking advantage of the re-oriented, larger image area. Arnoldscope was MGM's version of this horizontal process.
Then - in further pursuit of the "golden rectangle," we can suppose - the actual size of the film was changed. A process known as Todd-AO introduced a 65mm negative that was then printed onto 70mm release prints, and probably the most notable film that came out of it was OKLAHOMA! Although Todd-AO is gone, 70mm projection survives today in IMAX.
But just when the world thought the "wide screen wars" were over, yet more processes came out of Hollywood. Next was Anamorphic 65, squeezing an image not just on 35mm film, but onto 65mm film and then projecting it at 70mm. Thus arrived Ultra-Panavision and Camera 65, MGM's version of the 70mm Anamorphic process with BEN-HUR shot at an aspect ratio of 2.76:1.
But when cropping, squeezing, reorienting and up-sizing failed to be enough, the "Aspect Ratio Revolution" continued with yet more processes: Cinerama, Polyvision and even a Soviet process known as Kinopanorama. These processes had the gall to increase the number of cameras and projectors used in the capture and display of the widescreen image. Cinerama synchronized 3 cameras side-by-side and then exhibited each camera's footage with 3 side-by-side projectors, so that the resultant aspect ratio of the image was 2.89:1. Polyvision got an even wider 4:1 aspect ratio by doing the same with three 1.33:1 images. Later, Cinerama upgraded its process to originate a single anamorphic image on 65mm film, which was divided into 3 by an optical printer and exhibited by 3 synchronized projectors.
In the end, the most practical processes, and the ones that survive today, are the 1.33:1 Academy frame cropped to 1.85:1 and CinemaScope-style anamorphic, the process whereby a widescreen image is anamorphically squeezed onto an Academy frame and then unsqueezed upon projection, giving an aspect ratio of at least 2.35:1.
While the movie industry has pretty much adopted the 1.85:1 and 2.35:1 aspect ratios - again meaning 1.85 units wide for every 1 unit high and 2.35 units wide for every 1 unit high - modern video screens seem to have settled down to a widescreen aspect ratio of what they insist on calling 16:9. 16:9 is another way of saying 1.78:1. Hopefully these three competing aspect ratios will eventually settle down to one acceptable universal widescreen aspect ratio.
The current industry doesn't even use consistent terminology for aspect ratios. As noted above, the digital video world calls a 1.78:1 aspect ratio "16:9," presumably because 16:9 is easier to remember than 1.78:1 (derived by dividing 16 by 9). But the standard 1.33:1 aspect ratio for 35mm film, set by the Academy of Motion Picture Arts and Sciences, is also called 4:3 (derived by dividing 4 by 3). All this can be very confusing to anyone who is not a mathematician.
But we have the so-called NTSC analog TV standards as well, and these are often delineated in "scan lines." Thus a "frame" of TV is composed of 525 horizontal scan lines, of which only 483 can be seen as the "visible raster." The remaining lines (the "vertical blanking interval") are used for technical housekeeping functions such as synchronizing the picture and allowing CRT-based displays time to retrace the scan from the bottom to the top of the screen.
To make matters even more complicated, digital TV standards (i.e., "video") are delineated in "pixel" units of width and height. For example, 1280 x 720 means the picture is 1280 pixels wide and 720 pixels high. But this can be misleading because "pixels" are sometimes square and sometimes rectangular. Thus, an image with an aspect ratio of 16:9 can be built from 1280 x 720 SQUARE pixels - or from 1024 x 768 RECTANGULAR pixels.
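The relationship is easy to sanity-check with a little arithmetic. In this Python sketch (our own illustration), the "pixel shape" is whatever width-to-height ratio each pixel must have for a given pixel grid to fill a 16:9 screen:

    # display ratio = (grid width / grid height) x (pixel width / pixel height)
    def pixel_shape(grid_w, grid_h, display_ratio):
        return display_ratio / (grid_w / float(grid_h))

    print(pixel_shape(1280, 720, 16.0 / 9.0))   # 1.0    -> square pixels
    print(pixel_shape(1024, 768, 16.0 / 9.0))   # 1.333  -> rectangular pixels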
We have called this an "Aspect Ratio Revolution," but it might be better named the "Aspect Ratio War" because it's been going on since the Greeks. Given this, we suggest the technology world settle on a widescreen aspect ratio of 2:1. This aspect ratio is derived by averaging the three most popular widescreen aspect ratios of the last century: (1.85 + 2.35 + 1.78) / 3 = 1.99. Rounded off, 1.99 becomes 2.00 - easy to remember. If agreeable, all widescreen formats - for film, tape, video, TV, computer monitors, mobile devices, etc. - could simply be defined by stating: "the screen is twice as wide as it is high." This is simple to understand, accommodates the range of aspect ratios that have been suggested and tried over the past 2,500 years, and is something all modern digital equipment can easily adapt to.
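The averaging behind this proposal is simple enough to check in two lines of Python:

    ratios = [1.85, 2.35, 1.78]                 # flat film, scope, 16:9 video
    print(round(sum(ratios) / len(ratios), 2))  # 1.99 -- call it 2:1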
A universal widescreen aspect ratio would be a worthwhile standardization, because the incessant machinations over a movie's height and width cause incredible hassles with the importing and exporting of digital images, not to mention conversions from film to video and vice versa, and countless other problems.
THE RESOLUTION REVOLUTION

Akin to the aspect ratio revolution, we have the resolution revolution. This is actually a series of revolutions because resolution, unlike an arbitrary aspect ratio standard, is part of image quality itself. Image quality consists of "sharpness" (degree of focus), "latitude" (range of shades possible between pure white and absolute black), "exposure" (overall luminosity), "color saturation" (how deep or present the colors are), "color balance" (true-to-life representation of the spectrum), "contrast" (range of shades used) and, last but not least, "resolution" (how much detail an image can hold).
While all of the image quality factors are endlessly addressed in the digital world - and even over-complicated in photo touch-up applications, editing applications and every other image application - resolution is probably the most challenging.
The reason resolution is challenging is because it hits at the heart of information technology itself. When we say that resolution is "how much detail an image can hold," we are really talking about how much information the image can hold. This translates into how much information the "file" storing the image holds, and gets into the matters of "memory" and "compression," the act of making less memory do more storage.
The concept of resolution gets easily entangled with the concept of aspect ratios, hence many people have no idea what 1080 at 30 fps means as opposed to 1080 at 24 fps (let alone 720 at 30 fps vs. 1080 at 24 fps). Then we toss in the idea of "interlaced" scan (i) and "progressive" scan (p), and someone feels like jumping off a cliff. Relax. In brief, this is how pixel dimensions (also called "display resolution") relate to resolution:
A file with more data can draw an image with more pixels. More pixels mean more resolution, because resolution is detail created by pixels. Hence more pixels require more data storage, or larger files. But a larger file takes longer for a computer to read, so each file making up a frame of picture cannot be propagated as quickly unless increased computing power is available. Since standard motion picture propagation is 24 frames per second (fps), the same amount of computing power can give us either more frames with less data or fewer frames with more data in a given unit of time. Looked at another way, if the propagation speed of a movie is held constant at, say, 30 fps, an image that is 720 pixels wide will look better than an image that is 240 pixels wide. It will look better because the former image contains more data - more pixels - than the latter.
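Here is a rough sketch of the trade-off in Python, counting uncompressed pixels only (real-world codecs complicate this considerably):

    def pixels_per_second(width, height, fps):
        # the raw pixel throughput a given format demands of the hardware
        return width * height * fps

    # more frames of less data vs. fewer frames of more data:
    print(pixels_per_second(1280, 720, 30))    # 27,648,000 pixels/s
    print(pixels_per_second(1920, 1080, 24))   # 49,766,400 pixels/s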
As computers become faster (referred to in terms of "MIPS," "bps," "speed," "data rate" or "processing power"), AND the transmission rate of the Internet increases (usually misnamed "bandwidth," an analog term), the resolution of images increases. In essence, this means that MORE information can be presented to the viewer's mind in a shorter interval of time. Again, this basically means more pixels per second. Those pixels can be presented in a small area on the screen or in a large area. If they are presented in a large area, it will take MORE of them to give the same image quality (resolution) when viewed at a given distance. If they are presented in a small area, it will take FEWER of them to give the same image quality (resolution) when viewed at a given distance. If the distance to the screen increases, the image appears smaller; thus, in order to hold the apparent image size constant, one must INCREASE the actual image size with distance. But to do this one must also increase the resolution of the image if the image quality is to remain constant - and to do this the image file must be larger, with more pixels, or it must propagate at more frames per second.
The standard propagation speed of film is 24 fps and the standard for TV is 30 fps. The reason film could run slower is that, until recently, each frame of film contained more (chemical) image information than each frame of TV contained (electronic) data. Thus, TV ran faster in an attempt to make up for the deficiency. The fact that AC electric power in the United States was produced at a frequency of 60 Hz (cycles per second) was also a major factor. In fact, the TV did not paint 525 continuous "scan" lines on the face of the picture tube for each TV frame; it painted 262.5 scan lines twice for each TV frame. First, the odd-numbered lines were "drawn" on the screen from top to bottom. Then, the even-numbered lines were scanned between the odd-numbered ones. Thus the 262.5 odd-numbered lines "interlaced" with the 262.5 even-numbered lines, creating a complete image every 1/30 of a second. And after each interlaced image was created, the next was created in the same way. Each complete image being a TV "frame," there were 30 complete frames produced every second - or what one could consider 60 "fields" completed every second. This is why we call the video standard 30/60 NTSC.
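The NTSC arithmetic is easy to verify - a sketch using the nominal 60 Hz figure rather than the exact 59.94:

    TOTAL_LINES = 525     # scan lines per NTSC frame
    FIELD_RATE = 60       # fields per second, tied to 60 Hz AC power

    lines_per_field = TOTAL_LINES / 2.0   # 262.5 odd lines, then 262.5 even
    frame_rate = FIELD_RATE / 2           # two interlaced fields = one frame
    print(lines_per_field, frame_rate)    # 262.5 30 -> the "30/60" in NTSC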
Thus when we went from interlaced formats like 1080i to the progressive 720p and 1080p, what that meant - other than more sales for Circuit City and Best Buy - was the transition from INTERLACED scanning to PROGRESSIVE scanning: "i" being short for interlaced and "p" being short for progressive. Progressive simply means NOT interlaced. Progressive scanning means that the electron beam of a TV scans the complete frame in only one pass and does NOT skip any lines. It just moves from one line to the next progressively.
Progressive scanning yields a higher effective resolution, and this is why the consumer industry has gone to 1080p and the pro industry has gone even higher, to what it calls 2K and 4K.
To make matters more confusing, the pixel dimensions for different display resolutions are inconsistent. To wit:
720p is an image 1280 wide x 720 high, and 1080p is an image 1920 wide x 1080 high. In both cases the terms "720p" and "1080p" are derived from the HEIGHT of the image. But when we go to higher resolutions, look what happens. An image of 2048 x 1152 is called "2K." An image of 2880 x 1620 is called "2.8K." An image of 3072 x 1728 is called "3K," and an image of 4096 x 2160 is called "4K." In other words, the image WIDTH is suddenly used as the naming convention rather than the image height. More confusion? Hey, it's just part of the battles and revolutions.
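The inconsistency is plain when the formats are tabulated - a quick sketch using the pixel dimensions just listed:

    formats = {                  # name: (width, height) in pixels
        "720p":  (1280, 720),    # named after its HEIGHT
        "1080p": (1920, 1080),   # named after its HEIGHT
        "2K":    (2048, 1152),   # named after its WIDTH
        "2.8K":  (2880, 1620),
        "3K":    (3072, 1728),
        "4K":    (4096, 2160),   # named after its WIDTH
    }
    for name, (w, h) in formats.items():
        print("%5s = %4d x %4d (%.2f:1)" % (name, w, h, float(w) / h))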
THE INPUT DEVICE REVOLUTION

What is an input device? An input device is anything one uses to provide input to a computer. In rough order of invention these are: the keyboard, the "mouse," the "wiggle stick," the "touch pad," the "touch screen" and voice activation. And, of course, Stephen Hawking and Bill Gates probably have even more input devices at their command.
The input device revolution centers mostly around the keyboard vs. the mouse. At first, all computers operated with "command line prompts." In other words, you had to actually type in the command for what you wanted the machine to do on a line at the bottom of the screen. Then, one day in 1968, the "mouse" invaded the computer landscape, and suddenly one didn't have to type so much - one only had to point and click.
Both the mouse and the keyboard have been steadfast companions, each having its purpose and undergoing improvements over the years. The early mouse operated by a rotating ball moving over a pad. Then the first generation "optical mouse," which needed special striated pads, came along. These were less than accurate, so everyone went back to the ball-assisted mouse. These then improved with right and left buttons and even a middle button. The romance with middle buttons, however, soon gave way to the more useful scroll wheel. Then, just when Logitech had improved the mouse to the point that it could get no better, it actually DID get better when the company introduced a new generation of optical mouse with mind-boggling accuracy. To date, there has never been a better optical mouse than the Logitech B100 wheel mouse (manufacturer's number: M-U0026). But in its haste to make ever greater profits, Logitech has discontinued this mouse and started producing all manner of "mice," some acceptable, but most over-produced.
As the keyboard matured, it became less like an input device and more like the control console of a jet airplane. New keys appeared, even buttons and dials for audio levels. Some light up, some are ergonomic, most are now USB compatible. But now there's a retro trend to mimic the keyboards that IBM originally put out, the kind with a mechanical, tactile click. Yes, keyboards became so cheap they literally piled up in many of our attics, garages, closets and basements for most of the 1990s and 2000s. Now the trend is super-expensive keyboards, boards that can cost over $100 - but boy are they cool - and perfect for the gamer who's tired of work.
THE MOBILE REVOLUTION

And speaking of being tired of work, who wants to be tethered to a PC in an office these days? Thus, enter the laptop/cell-phone/iPad/iPhone revolution, where NO ONE is at a desk anymore. And with this revolution - what can be called the "Mobile Revolution" - have come the other input devices we touched on above. Most prevalent are the touch pad and what we will call the "mouse stick" - you know, that obnoxious little stick you use to move the pointer around. The problem with the keyboards, touch pads and mouse sticks on laptops is that they are all insanely impractical for serious work. Sure, the constant traveler with endless time on his hands, the millionaire who never has to work, and the teenager endlessly looking to "hook up" all love these portable devices, and some have managed to do productive things on them. But to do really serious work, the PC based in an office or the home, with a full-sized keyboard and a good optical mouse, is the only way to go.
Nevertheless, the mobile revolution, influenced by the input revolution, continues, now moving into the "touch screen" area and "voice-activated" applications. Soon computer input and processing power will (hopefully) enable such mobility we will be no different than the beings on STAR TREK. That said, the next revolution is well underway.
THE CONNECTIVITY REVOLUTION

The connectivity revolution is based on the battle between "WiFi" and "cables."
Most agree that cables are not only ugly, they are a pain because every cable has a different termination connector that must be accommodated and purchased. There are scores, if not hundreds, of connectors, and no doubt profit plays a significant part in this mess.
First, monitors were connected with CGA, then EGA, then VGA, then DVI and now HDMI. Each had a specialized male and female connector. Then we had parallel and serial ports connecting printers and scanners; now they are almost all USB. Cameras had FireWire ports that have now given way mostly to USB 3.0 and HDMI. And as for video, we had "composite" video, then "S-video," then "component" video, not to mention BNC and other professional connectors. Now most video is carried by HDMI, and HDMI is great, but it is used mostly for output to devices like monitors.
For now, it seems input and output devices are centering around two standards: HDMI and USB 3.0. Let's hope everything stays with these ports - or migrates to wireless technology, known as WiFi.
The good thing about the WiFi revolution is that devices are starting to become reliable and easily installed. Wireless Internet connections and printers now work as well as the wireless keyboard and wireless mouse. If this trend keeps up, we may see a totally wireless civilization emerge. After all, isn't one of the "duties" of a computer to "compute"? If so, why can't engineers relegate to computers the job of computing what their connectivity should and must be? In the wireless universe this is possible because connectors are not needed.
Unless someone comes up with a "universal wire" - a wire that can connect all devices to all other devices - WiFi should make continued strides, even into peer-to-peer, walkie-talkie-like free phones. Eventually a universal wire may be plausible. Such a wire might consist of hundreds of bundled nano-engineered wires, and as certain bandwidth and data-rate requirements were stipulated by the computers between devices, the universal wire would configure itself automatically to handle the loads. A universal wire of, say, 100 strands might allocate 80 of those strands to carrying video and 10 to each channel of stereo audio.
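Purely as a thought experiment, the strand-allocation idea might look something like this in Python - every name and number here is hypothetical, since no such wire exists:

    TOTAL_STRANDS = 100   # strands in our hypothetical universal wire

    def allocate(requests):
        # requests: {signal name: strands needed}; the devices would
        # negotiate these figures and the wire would configure to match.
        if sum(requests.values()) > TOTAL_STRANDS:
            return None   # over capacity; the wire cannot carry this load
        return requests

    print(allocate({"video": 80, "audio left": 10, "audio right": 10}))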
THE CENTRAL vs. DISTRIBUTED REVOLUTION

The connectivity revolution is really opening the door to a battle that has been with Humankind for thousands of years, a battle that goes by different names. Some call it central intelligence vs. distributed intelligence, or mainframes vs. personal computing. It can be called proprietary software vs. open source software. It can even be applied to the political sphere, as F.A. Hayek did when he warned that central planning by the state stifles individual initiative and inevitably leads to totalitarianism.
In the technology realm we see the various mentalities of central vs. distributed intelligence vying for supremacy all the time. The current battle is known as "cloud computing." The idea behind cloud computing is similar to the idea behind the central planning of a Soviet country: An elite of politicians (in this case computer programmers) plan and engineer the economy (or your local PC) from a distant central location (the headquarters of the Adobe or Microsoft corporation, for instance).
So, first, we had mainframe computers where the central computer was very intelligent and the terminals were dumb. Now we have a situation where the terminals are very smart and there is no mainframe. What the advocates of cloud computing are now proposing is that we once again make the terminals dumb and the network smart - specifically the mainframe computers ON the network smart. This leaves the individual out on the network with little or no personal intelligence because under cloud computing the individual only RENTS intelligence, never OWNS it. Thus the entire concept of cloud computing falls nicely into the political rubric of the socialist mentality.
SUMMARY AND CONCLUSION

Given the fact that at least ten technological revolutions - some of them full-blown wars - are proceeding, many simultaneously, it's no wonder the world is in a state of shock and confusion. Sure, most of these revolutions, and the attendant advancements, are wonderful, and they have enabled all of us. And these wars and revolutions may usher in the Singularity, that time when computer intelligence surpasses human intelligence. But if we are to really benefit from all this confusion in the meantime, we must consolidate our gains and prune out the unnecessary technologies. We must curb our tendency to over-diversify for profit's sake and seek to invest in FEWER but BETTER technologies. And then we must STANDARDIZE these technologies and move on to develop yet additional TOTALLY NEW technologies. And by new technologies we are not talking about mere refinements, as has been the case for the past 40 years. We are not talking about better mousetraps or improved widgets; we are talking about entirely new, never before invented or discovered technologies. And if there simply ARE no new technologies that can be invented or discovered - no new wheels, fires, airplanes, radios, nuclear energy or nano-engineered materials - then we must get off the planet and seek newness in the infinite and open universe. Starting with the exploration and colonization of the solar system, humanity must take these steps personally, not with mere robots. Cowards send robots. Real men and women go themselves.
The ten technological revolutions that we have summarized here are well known. Other than the esoteric particulars of engineering specs and other such minutiae, the concepts behind all of the technology discussed are simple and straightforward. It should therefore be a simple and straightforward task to weed out all the unnecessary technologies and migrate toward the most practical, economical and highest-quality technologies possible.
This is no place or time for greedy, opportunistic corporations to shave off yet more grams of plastic, fill the box with yet less product, build in obsolescence sooner, or hollow out and spray fake metal more often. These are the ploys of pathetic shysters, not the contributions of those who build civilizations. These are the actions of executives, stockholders, government officials and other mentally-challenged entities who fail to understand that the infinite universe is there for us. It's even big enough for them. And big enough for any number of non-zero-sum games that are sure to manifest as humanity properly develops, controls and passes through its technological revolutions.
Originated: 08 December 2013
Revised and Supplemented: 09 December 2013
Revised and Supplemented: 11 December 2013
Supplemented: 14 December 2013