Computers in Spaceflight: The NASA Experience


In my travels during the research phase I was privileged to meet and work with a large number of NASA and contractor personnel. Those listed in the bibliographic note as granting interviews usually shared rare materials from their files as well. Some were asked to do technical reviews of individual chapters or sections of chapters to help eliminate as many errors of fact and interpretation as possible. At each site individuals opened doors for me and found office space where none was available. Frank Penovich of Kennedy was especially helpful in obtaining a tour of the Shuttle facilities.

The SEI was kind enough to permit use of their equipment to assist in preparing the final drafts of the manuscript. My assistants Katherine Harvey and Suzanne Woolf did yeoman work editing and formatting the text for laser printing. My thanks also goes to my wife, who lovingly never let me give up.

Expensive to purchase and operate, the giant mainframe computers of the era needed a small army of technicians in constant attendance to keep them running. Within a decade and a half, NASA had one of the world's largest collections of such monster computers, scattered among its centers.

Moreover, to the amazement of anyone who knew the computer field at the time, NASA also flew computers in orbit, to the moon, and to Mars, the latter machines running unattended for months on end. Within another 10 years the giant ground-based mainframe would be supplanted by clusters of medium-sized computers in spaceflight operations, and the single on-board computer would be replaced by multiple machines. These remarkable changes mirror developments in the commercial arena. Where there were giant computers, small computers now do similar tasks.

Where there were no computers, such as on aircraft or in automobiles, computers now ride along. Where once the only solution was the large, centralized computing center, distributed computers now share the load. Since NASA is well known as an extensive user of computers, mainly because spaceflight would not be possible without them, there is a common perception that NASA's requirements served as a main driver of the rapid growth and innovation in the computer industry.

Actually, the situation is not so straightforward. In most cases, because of the need for reliability and safety, NASA deliberately sought to use proven equipment and techniques. Thus, the agency often found itself in the position of having to seek computer solutions that were behind the state of the art by flight time.

In other cases, however, NASA made some use of nearly leading-edge technology, mostly in ground systems, but occasionally in flight when no extensively proven equipment or techniques were adequate. This was especially true on unmanned spacecraft, because the absence of human pilots allowed greater chances to be taken.

Thus generalizations cannot be made, other than that there was no conscious attempt on the part of NASA in its flight programs to improve the technology of computing. Any ways in which NASA contributed to the development of computer techniques were side effects of specific requirements. NASA uses computers on the ground and in manned and unmanned spacecraft. These three areas have quite different requirements, and the nature of the tasks assigned to them resulted in varying types of computers and software. Thus, the impact of NASA on computing differs in extent as a result of the separate requirements for each field of computer use, which is one reason why the three fields are considered in separate parts of this volume.

Computers are an integral part of all current spacecraft. However, Mercury, the first manned spacecraft, did not carry a computer. Fifteen years of unmanned earth orbital and deep space missions were carried out without general-purpose computers on board. Yet now, the manned Shuttle and the unmanned Galileo spacecraft simply could not function without computers. In fact, both carry many computers, not just one. This transition has made it possible for current spacecraft to be more versatile.

Increased versatility is the result of the power of software to change the abilities of the computer in which it resides and, by extension, the hardware that it controls. As missions change and become more complex, using software to adjust for the changes is much cheaper and faster than changing the hardware. On-board computers and ground-based computers store data and do their calculations in the same way, but they handle processes and input and output differently. A typical ground computer of the early 1960s, when the first computers flew on manned spacecraft, would process programs one at a time, one right after another.

This sort of processing, in which the entire program must be loaded into memory and data must be available in discrete form, is called "batch." In a batch process, if the computer is doing a calculation, the input and output devices are idle. If it is using a peripheral device, the calculating circuits are not used. One way to improve the efficiency of the batch process is to develop an operating system that permits one program to use resources currently unneeded by another program.

Another method is to limit each program to a fraction of a second of running time before going on to the next program, running that one for its fraction, and so on until the original program gets picked up again. This cyclic, time-sliced method permits many users to be connected to the computer, or many jobs to run on it, in such a way that the machine appears to be serving each of them continuously. The computer is so fast that no one notices that his or her job is being done in small segments.

Each of these methods presupposes that data for a program are available when it runs, that the data are processed, and that the program then stops. So even though many more programs are run through the system in a given period of time, each is still handled as a batch process. When the computer runs through all the processes waiting for execution, it stands idle. Spacecraft computers operate in a radically different processing environment.

They are in "real-time" mode, handling essentially asynchronous inputs and outputs and continuous processing, similar to a telephone operator who does not know on which line the next call will come. For example, computers controlling the descending Shuttle can hardly process commands to the aerodynamic surfaces in batch mode. The requirement for real-time processing leads to other requirements for spacecraft computers not normally found on earth-based systems. Software must not "crash" or come to an abnormal end.

If the software stops, the vehicle ceases to be controllable. Hardware must also be highly reliable, or reliability can be obtained through redundancy. If the latter course is chosen, the overhead of redundancy management hardware and software will be high. Memories must be nonvolatile in most applications, so that if power is lost the program in storage will not disappear. Since modern semiconductor random-access memories are usually volatile, older technology memories such as ferrite core continued to be used on spacecraft.

Weight, size, and power are other considerations, just as with all components on a spacecraft. Even though both manned and unmanned spacecraft have similar requirements, until very recently they could not use the same computers. No computer with sufficient calculating capability to control the Shuttle flew on an unmanned spacecraft. Conversely, the Shuttle computers are so large and power hungry that they would overwhelm the power supply of a deep space probe.

Modern powerful microprocessors make it possible to overcome these deficiencies, but the systems described herein predate most microprocessor technology. Also, computers on manned spacecraft are oriented toward relatively short missions lasting up to a few weeks, which will change in the Space Station and Mars Mission eras. Computers on unmanned earth orbital missions and deep space probes need to run reliably for years, yet must have low power requirements.

Even though both need to be trustworthy, the different mission conditions dictate how reliability is to be attained. NASA's challenge in the 1960s and 1970s was to develop computer systems for spacecraft that could survive the stress of a rocket launch, operate in the space environment, and thus provide payloads with the increased power and sophistication needed to achieve increasingly ambitious mission objectives. NASA found itself both encouraging new technology and adapting proven equipment. In manned spacecraft the tendency was to use what was available.

On unmanned spacecraft innovation had a freer hand. In contrast, NASA's ground computer systems reflected the need for large-scale data processing similar to many commercial applications, but in a real-time environment, until recently not normally a requirement of business computing. Therefore, commercially available computers could be procured for most of the ground-based processing, with any innovation confined to software that handled the real-time needs.

Preflight checkout, mission control, simulations, and image processing have all used varying combinations of standard mainframes and minicomputers. Some of the software innovations needed on the ground have naturally had greater impact on the wider world than those made for on-board computers.

The techniques of software development learned by NASA while doing both flight and support programming have advanced the state of the art of software engineering, which comprises the management and technical principles that make it possible to build large, reliable software systems. Even though the requirements and solutions to computing problems in the manned on-board, unmanned on-board, and ground arenas are different, several common themes bind the three together. In nearly all cases, NASA managers failed to allow adequately for system growth, often forcing expensive software and hardware additions just to meet scaled-down objectives.

More positively, recent developments are designed to enable proven computer systems and techniques to fly or support more than one mission, reducing the costs associated with customized solutions. Also, there is a continuing reliance on multiple smaller computers operating in a network as opposed to large single computers, enabling task distribution and more economical means of ensuring reliability. This last trend also underscores the dependence on communications that has characterized NASA's far-flung flight operations since the beginning. These themes appear in varying strengths throughout the stories of the individual projects.


Regardless of NASA's impact on computing, its many uses of computing technology over the years provide valuable examples of the growth in power, diversity, and effectiveness of the applications of computers. The late 1950s marked the beginning of the computer industry as an indispensable contributor to American science and business. NASA's insatiable desire to make the most of what the industry could offer resulted in many interesting and innovative applications of the ever-improving technology of computing. The first manned spaceflight program to use computers continuously in all mission phases was Apollo.

[Photo: Mission controllers watch computer-driven displays while astronauts explore the lunar surface after a computer-controlled descent.]

NASA's manned programs comprise Mercury, Gemini, Apollo, Skylab, and the Shuttle. The latter four programs produced spacecraft that had on-board digital computers. The Gemini computer was a single unit dedicated to guidance and navigation functions. Apollo used computers in the command module and lunar excursion module, again primarily for guidance and navigation.

Skylab had a dual computer system for attitude control of the laboratory and pointing of the solar telescope. NASA's Space Shuttle is the most computerized spacecraft built to date, with five general-purpose computers as the heart of the avionics system and twin computers on each of the main engines. The Shuttle computers dominate all checkout, guidance, navigation, systems management, payload, and powered flight functions.

NASA's manned spacecraft computers are characterized by increasing power and complexity. Without them, the rendezvous techniques developed in the Gemini program, the complex mission profiles followed in Apollo, the survival of the damaged Skylab, and the reliability of the Shuttle avionics system would not have been possible. When NASA began to develop systems for manned spacecraft, general-purpose computers small and powerful enough to meet the requirements did not exist.

Their development involved both commercial and academic organizations in repackaging computer technology for spaceflight. The Mercury spacecraft, by contrast, was barely large enough for its single occupant and had no independent maneuvering capability save attitude control jets. Its orbital path was completely dependent on the accuracy of the guidance of the Atlas booster rocket.

Re-entry was calculated by a real-time computing center on the ground, with retrofire times and firing attitude transmitted to the spacecraft while in flight. Therefore, it was unnecessary for the Mercury spacecraft to have a computer, as all functions required for its limited flight objectives were handled by other systems.

At first glance, the Mercury and Gemini spacecraft are quite similar. They share the bell shape and other characteristics, partially because Gemini was designed as an enlarged Mercury and because the prime contractor was the same for both craft. The obvious difference is the presence of a second crew member and an orbital maneuvering system attached to the rear of the main cabin. The presence of a second crewman meant that more instrumentation could be placed in Gemini and that more experiments could be performed, as an extra set of eyes and hands would be available.

Gemini's maneuvering capability made it possible to practice rendezvous techniques. The main rendezvous target was planned to be the Agena, an upper stage rocket with a restartable liquid-propellant engine that could be launched by an Atlas booster. After rendezvous with an Agena, the Gemini would have greatly increased maneuvering capability because it could use the rocket on the Agena to raise its orbit. Successful rendezvous required accurate orbital insertion, complex catch-up maneuvering, finely tuned movements while making the final approach to the target, and guidance during maneuvers with the Agena.

Safety during the critical powered ascent phase demanded some backup to the ascent guidance system on the Titan II booster vehicle. The Gemini designers also wanted to add accuracy to re-entry and to automate some of the preflight checkout functions. These varied requirements dictated that the spacecraft carry some sort of active, on-board computing capability.

The resulting device was the Gemini digital computer. The Gemini computer functioned in six mission phases: prelaunch, ascent backup, insertion, catch-up, rendezvous, and re-entry. These requirements demanded a very reliable, fairly sophisticated digital computer with simple crew interfaces. IBM built such a machine for the Gemini spacecraft.

By the early 1960s, engineers were searching for ways to automate checkout procedures and reduce the number of discrete test lines connected to launch vehicles and spacecraft. The Gemini computer did its own self checks under software control during the prelaunch phase. During ascent, the computer received information about the velocity and course of the booster so that it would be ready to take over from the Titan's computers if they failed.


Switch-over could be either automatic or manual. Even if the updated parameters were not necessary for boost guidance, they were useful in calculating the additional velocity needed after the Titan's second-stage cutoff to achieve the proper orbit. Since the spacecraft was in contact with ground stations only intermittently, it would have been impossible to provide the sort of continuous updates needed for rendezvous maneuvers from the ground. That same mission also featured a fully computer-controlled re-entry, which resulted in a splashdown close to the planned target.



In computer-controlled descents, the roll attitude and rate are handled by the computer to affect the point of touchdown and re-entry heating. The Gemini spacecraft had sufficient lift capability to adjust the landing point a substantial distance along the line of flight and up to 40 miles laterally relative to the line of flight. Five minutes before retrofire, the computer was placed in re-entry mode and began to collect data. It displayed velocity changes during and after the retrofire. One of the IBM engineers involved, John J.

Lenz, said that the contract for Gemini came just at the right time. The best of the engineering teams at the IBM Federal Systems Division plant in Owego, New York were between assignments and were put on the project, increasing its chance for success. Restrictions on size, power, and weight influenced the final form of the computer in terms of its components, speed, and type of memory.

The shape and size of the computer were dictated by the design of the spacecraft. It was contained in a single box sized to fit an unpressurized equipment bay to the left of the Gemini commander's seat, which also held the inertial guidance system power supply and the computer auxiliary power supply. However, the circuit modules that held the components were somewhat interchangeable.

[Figure: Locations of key components of the Gemini guidance system.]

The computer had no redundant circuits, which meant that a failure in the computer canceled whatever activity needed to be controlled by it. For example, a failure in the power switch three quarters of the way through the Gemini IV mission caused cancellation of the planned computer-controlled re-entry.

It was possible to fly the Gemini computer without a backup because whatever the computer did erroneously could be either abandoned (such as rendezvous) or handled, albeit more crudely, in other ways (such as re-entry using Mercury procedures). The machine had a relatively long instruction cycle, measured as the time it required for an addition.

The computer was serial in operation, passing bits one at a time, which explains the relatively slow processing speed, slower than that of some vacuum tube computers such as Whirlwind. Also, its fixed-point arithmetic unit limited the precision of the calculations but greatly reduced complexity.

The Gemini digital computer used ferrite cores for its primary memory. Core memories store one bit in each ferrite ring by magnetizing the ring in either a clockwise or counterclockwise direction. One direction means a one is stored, and the opposite direction means a zero. Each core is mounted at a perpendicular crossing of two wires.

Thousands of such crossings are in each core plane, which consists of rows of wires running up and down (the x wires) and others running left and right (the y wires). To change the value of a bit at a specific location, half the voltage required for the change is sent on each of two wires, one in the x direction and one in the y direction. This way only the core at the intersection of the two wires is selected for change. All the others on the same wires receive only half the required voltage.

By the use of a third wire it is possible to "sense" whether a selected core holds a one or a zero. In this way, each individual core can be read. The ferrite core memory in the Gemini computer had a unique design. It consisted of 39 planes of 64-by-64-bit arrays, resulting in 4,096 addresses, each containing 39 bits. A word was considered to be 39 bits in length, but it was divided into three syllables of 13 bits. The memory itself was divided into 18 sectors. Therefore, it was necessary to specify sector and syllable to make a complete address. Instructions used 13 bits of the word; data representations used 26 bits.

Data words were always stored in syllables 0 and 1 of a full word, but instructions could be in any syllable. The arithmetic and logic circuit boards and the core memory made up the main part of the Gemini computer. These components interfaced with a plethora of spacecraft systems, most of which were concerned with guidance and navigation functions.

This system was the whole of the Gemini digital computer through the Gemini VII mission. Beginning with Gemini VIII, the computer included a secondary storage system, which influenced the spacecraft computer systems built by IBM and flown on Skylab and the Shuttle. During the 1950s and well into the 1960s, the most ubiquitous method of providing large secondary storage for computers was high-speed, high-density magnetic tape.

By then, tape was used mainly to store large blocks of data not needed on a regular basis or to mail programs and data between sites. Disk storage was beginning to offer capacity rivaling or even exceeding tape, and thus to supplant it in common use, but at the time disk systems were large, expensive, and far from fully reliable.

[Photo: Cores like these were used in Gemini's memory. (IBM photo)]

When the software for the Gemini computer threatened to exceed the storage capacity of the core memory, IBM proposed an Auxiliary Tape Memory to store software modules that did not need to be in the computer at lift-off. For example, programs that provided backup booster guidance and insertion assistance would be in the core memory for the early part of the flight. The re-entry program could be loaded into core shortly before it was needed, writing over the programs already there. This concept, fairly common in earth-bound computer usage, was a first for aerospace computing.

[Figure: Layout of the Gemini digital computer core memory.]

The tape memory increased the available storage of the Gemini computer by seven and one-half times, with a capacity of over one million bits. NASA's natural insistence on high reliability in manned spaceflight operations challenged the computer industry of the early 1960s. The method used to protect the tape was to record each program three times, pass each set of three corresponding bits through a voter circuit, and send the result of the vote to the core memory. This scheme was later used on the Shuttle.


On Gemini VIII, shortly after a successful rendezvous with an Agena, the combined spacecraft began to spin out of control. Mission Control decided to disengage the Agena and bring the Gemini down early, as large amounts of attitude control thruster fuel had been wasted trying to regain control of the spacecraft.

Thus, the first attempt to load a program from the tape was made while the spacecraft was spinning; it succeeded. IBM obtained this sort of reliability, beyond the original specifications, as a result of an extensive testing program, which ensured a successful program load even under adverse conditions. Increased cleanliness in manufacturing was one solution to the error rates that testing uncovered. The only in-flight failure of a computer component came on the 48th revolution of the Gemini IV mission, when astronaut James McDivitt tried to update the computer in preparation for re-entry. At the time, many considered software development an incidental part of the overall application of computing.

[Photo: Auxiliary Tape Memory in test.]

Specialists wrote most of the software, usually in arcane assembly languages. Although the use of FORTRAN in technical applications was rapidly spreading, it was still considered too inefficient for computers like the Gemini digital computer. Many thought compiler-produced machine code less effective in utilizing machine resources than machine language programs written by humans. This sort of programming was considered more of an art than a science. Whereas the design and construction of computer hardware followed conventional engineering principles, software development was largely haphazard, undocumented, and highly idiosyncratic.

Many managers considered software developers to be a different breed and best left alone. This concept of software is a myth, and although it persists in some companies and with some people today, by and large software is now considered an engineered product, little different from a rocket engine or computer. Although the term "software engineering" did not come into common use until 1968, programmers had applied its basic tenets to both large and small software projects for at least 15 years.

Software engineering has evolved as programmers learned which techniques worked, which did not, and what actually occurred in the development of software products. Software engineers recognize that software follows a specific development cycle, from formal specification of the product, through the design and coding of the actual program, to testing of the product and postdelivery maintenance.

This cycle lasts for many years in the case of programs such as operating systems, or a short period of time in the case of specialized, single-use programs. During this development process, strict standards of documentation, configuration control, and management of changes and the correction of errors must be maintained. Also, breaking down the application into smaller, potentially interchangeable parts, or modules, is a primary technique.

Communication between programming teams working on different but interconnected modules must be kept clear and unambiguous. It is in these areas that NASA has had the greatest impact on software engineering. The Gemini software was, of course, the first on-board software for a manned spacecraft and was certainly a more sophisticated system than any that had flown on unmanned spacecraft to that point. When the time came to write the software for Gemini, programmers envisioned a single software load containing all the code for the flight.

Soon it became obvious that certain parts of the program, such as the ascent guidance backup, remained relatively unchanged from mission to mission. Designers then introduced modularization, with some modules becoming parts of several software loads.


Another reason for modularization was that the programs developed for Gemini quickly exceeded the memory available for them. Some were stored on the Auxiliary Tape Memory until needed. The problem of poor estimation of total memory requirements has plagued each manned spacecraft computer system.

The different versions were referred to by the name "Gemini Math Flow." Math Flow One consisted of just four modules: Ascent, Catch-up, Rendezvous, and Re-entry. This version of the software flew on spacecraft 2 in January 1965. By Math Flow Four, the re-entry initialization program had been successfully added, but the load took up nearly all of the available words of memory. The plan had been to use this program on spacecraft III and others, but a NASA directive issued in February changed the guidance logic of the re-entry mode to a constant bank angle rather than a proportional bank angle and constant roll rate.

The revised math flow had six program modules with nine operational modes. An Executor routine selected other routines depending upon mission phase. The use of simulations, such as a FORTRAN-based equation-validation program, was endemic to the Gemini software effort and was later applied to software development for other spacecraft computers. Gemini used three levels of simulation, beginning with the equation-validation system.

The third level was a refined digital simulation to determine the performance characteristics of the software, useful in error analysis. This Mission Verification Simulation (MVS) ensured that the guidance system worked with the operational mission program. Even if the software is perfect, errors may occur because of transient hardware or software failures during operation, due to power fluctuations or unforeseen demands on real-time programs. Routines to recover from such transients were put in the Gemini software and are now a part of all IBM computer systems. The software produced during the Gemini program was highly reliable and successful.

NASA was certainly better prepared to monitor software development for the much more difficult Apollo program. The astronauts' controls for the Gemini computer consisted of a mode switch, a start button, a malfunction light, a computation light, and a reset switch. The mode switch had seven positions for selection of one of the measurement or computation programs. The start button caused the computer to run the selected program loaded in its memory. The reset switch caused the computer to execute its start-up diagnostics and prepare itself for action. The Manual Data Insertion Unit (MDIU) consisted of two parts: a keyboard and a seven-digit register display. The first two digits of the register, a simple odometer-like rotary display, were used to indicate a memory address.

Up to 99 such logical addresses could be accessed. The remaining five digits displayed data. Errors caused all zeroes to appear. Negative numbers were inserted by making the first digit a nine; the other digits contained the value. The Incremental Velocity Indicator (IVI) displayed velocity increments required for, or resulting from, a powered maneuver. On orbit, if no powered maneuvers were imminent, the computer could be shut down to save electrical power. Due to the nature of core memory, programs and data stored magnetically in the cores would not disappear when the power was off, as they would in present-day semiconductor memories.

This made it possible to load the next set of modules, if necessary, from the Auxiliary Tape Memory, enter any needed parameters, and then shut down the machine until shortly before its next use. It took 20 seconds for the machine to run its start-up diagnostics upon restoration of power.

After the diagnostics ran successfully, the current program load was ready for use, all parameters intact. GT-IV followed such a procedure in preparing for re-entry on June 7, 1965. The computer was placed in the RNTY (re-entry) mode, and the crew received and entered updated parameters given to them while in contact with the ground stations. Using the computer for catch-up and rendezvous was a relatively simple task.

The difference between catch-up and rendezvous is that catch-up maneuvers are executed to put the spacecraft into position to make an orbit-change maneuver. Crews began the catch-up by entering the desired ground-calculated rendezvous angle into address 83. The rendezvous angle indicated how much farther along in a 360-degree orbit the rendezvous was to take place. For example, if the crew desired rendezvous one-third of an orbit ahead, 120 was entered into address 83 using the MDIU. The interval at which the pilot wanted to see updates was then entered into another address. For example, if 40 was entered, the computer would display on the IVI any required velocity changes at 120 degrees from the rendezvous point (the start), 80 degrees to go, and 40 degrees to go.

The crew also prepared backup calculations, which were compared with the ground-calculated solution as well as the computer's solution. These examples of the use of the computer on a typical flight demonstrate that it was a relatively straightforward assistant in guidance and navigation. It permitted the Gemini astronauts to be independent of the ground in accomplishing rendezvous from the terminal-phase intercept maneuver to station keeping, a valuable rehearsal for the lunar orbit rendezvous required for the Apollo program.

The astronauts participated in both the hardware and software design of the computer and its interfaces, and they were able to go to Owego and take part in man-in-the-loop simulations. By flight time, like everything else in the cockpit, use of the computer was second nature. Bachman of IBM characterized the machine as the "last of a dying breed." Nonetheless, its designers claim an impressive list of firsts. Development of the Gemini computer helped IBM in significant ways, contributing to its 4 Pi series of avionics computers. This series eventually produced the computer used on Skylab and the AP-101 used in the Shuttle.

Coupled with IBM's involvement in the real-time computing centers used to monitor Mercury and Gemini missions, the company established itself as a major contributor to America's space program, as it had been to the military research and development effort. However, even though identification with the space program has been maintained through several high-visibility projects, no significant commercial hardware products resulted as spinoffs. For NASA, Gemini and its on-board computer proved that a reliable guidance and navigation system could be based on digital computers.

It was a valuable test bed for Apollo techniques, especially in rendezvous. However, the Gemini digital computer itself was totally unlike the machines used in Apollo. With its Auxiliary Tape Memory and core memory, the Gemini computer was more like the Skylab and Shuttle general purpose computers.

It is in those systems where its impact is most apparent. Navigating from the earth to the moon and the need for a certain amount of spacecraft autonomy dictated the use of a computer to assist in solving the navigation, guidance, and flight control problems inherent in such missions. Before President John F. Kennedy publicly committed the United States to a "national goal" of landing a man on the moon, it was necessary to determine the feasibility of guiding a spacecraft to a landing from a quarter of a million miles away.

The availability of a capable computer was a key factor in making that determination. The Instrumentation Laboratory of the Massachusetts Institute of Technology (MIT) had been working on small computers for aerospace use since the late 1950s, including one designed for an unmanned space probe study; that computer could be interfaced with both inertial and optical sensors.

In addition, MIT was gaining practical experience as the prime contractor for the guidance system of the Polaris missile. In an early NASA study, Robert G. Chilton examined guidance for lunar missions; an on-board digital computer was part of the design. The existence of these preliminary studies and the confidence of C.

Stark Draper, then director of the Instrumentation Lab that now bears his name, contributed to NASA's belief that the lunar landing program was possible from the guidance standpoint. The presence of a computer in the Apollo spacecraft was justified for several reasons; three were given early in the program, yet none of these became a primary justification. Rather, it was the reality of physics: at lunar distance a radio signal takes well over a second each way, making moment-to-moment control from the ground impractical. These considerations and the consensus among MIT people that autonomy was desirable ensured the place of a computer in the Apollo vehicle.

Planners even decided to calculate the lunar orbit insertion burn on the ground and then transmit the solution to the spacecraft computer, which somewhat negated one of the reasons for having it. The computer had not only navigation functions but also system management functions governing the guidance and navigation components. The Apollo computer system did not have as long a list of responsibilities as later spacecraft computers, but it still handled a large number of tasks and was the object of constant attention from the crew. Despite MIT's experience with aerospace computers, the Apollo project turned out to be a genuine challenge.

One of the MIT people later recalled that if the designers had known at the start what they learned later, or had a complete set of specifications been available, the design would have gone quite differently. Fortunately, the technology improved, and the concepts of computer science applied to the problem also advanced as MIT developed the system. Managing such a project was completely outside NASA's experience. A short time after awarding the Apollo guidance contract, NASA became involved in developing the on-board software for Gemini (a much smaller and more controllable enterprise) and the software for the Integrated Mission Control Center.

Different teams, begun within the Space Task Group and later part of the Manned Spacecraft Center in Houston, managed these projects with little interaction until the mid-1960s, when the two Gemini systems approached successful completion and serious problems remained with the Apollo software. Designers borrowed some concepts from the Gemini effort to assist the Apollo project.

They were to learn together the principles of software engineering as applied to real-time problems. For a number of reasons, planners rejected the direct flight method of launching from the earth, flying straight to the moon, and landing directly on the surface. Besides the need for an extremely large booster, it would require flawless guidance to land in the selected spot on a moving target a quarter of a million miles away.

A spacecraft with a separate lander would segment the guidance problem into manageable portions. First, the entire translunar spacecraft would be placed in earth orbit for a revolution or two to properly prepare to enter an intercept orbit with the moon. Upon arriving near the moon, the spacecraft would enter a lunar orbit. It was easier to target a lunar orbit window than a point on the surface. The lander would then detach and descend to the surface, needing only to guide itself for a relatively short time.

After completion of the lunar exploration, part of the lander would return to the spacecraft still in orbit and transfer crew and surface samples, after which the command module (CM) would leave for earth. With a lunar orbit rendezvous mission, more than one computer would be required, since both the CM and the lunar excursion module (LEM) needed on-board computers for the guidance and navigation function.

The CM's computer would handle the translunar and transearth navigation, and the LEM's would provide for autonomous landing, ascent, and rendezvous guidance. Ground systems backed up the CM computer and its associated guidance system so that if the CM system failed, the spacecraft could be guided manually based on data transmitted from the ground.

If contact with the ground were lost, the CM system had autonomous return capability.

Since the lunar landing did not allow the ground to act as an effective backup, the LEM carried an Abort Guidance System (AGS) to provide backup ascent and rendezvous guidance. It was not capable of providing landing assistance except to monitor the performance of the Primary Guidance, Navigation, and Control System (PGNCS).

Old Technology versus New

There always seem to be enough deficiencies in a final product that the designers wish they had a second chance.

In some ways the Apollo guidance computer was a second chance for the MIT team, since most of its members had worked on the Polaris computer. That was MIT's most ambitious attempt at an "embedded computer system," a computer that is intrinsic to a larger component, such as a guidance system. Although the Apollo computer started out quite similar to the Polaris machine, it evolved into something very different.

The Apollo guidance computer had two flight versions: Block I and Block II. Block I was basically the same technology as the Polaris system; Block II incorporated new technology within the original architecture. NASA's challenges to the MIT contract and the decision to use the rendezvous method instead of a direct ascent to the moon were decisive. A third factor related to reliability. Finally, the benefits of the new technology influenced the decision to make Block II.


Harry J. Goett had advanced a proposal promising savings, but Chilton challenged Goett's idea, showing that the expected savings would not materialize. Two years later, the deficiencies of the Polaris-based system were obvious while the solutions offered by the new technology of the Block II version were still unproved. The early Apollo development flights were to use the CM only; later flights would include the LEM. Reliability was another force behind Block II. During early planning for the guidance system, redundancy was considered a solution to the basic reliability problem.

Designers thought that two computers would be needed to provide the necessary backup; however, they dropped this scheme, in part because none of the variations of the two-computer or other redundancy schemes could meet the power, weight, and size requirements. The Block I design, due to its modularity, could instead be fixed during a mission that carried appropriate spares. The most important reason for going to Block II, though, was the availability of new technology.

The Block I design used core-transistor logic, which had several disadvantages. These led MIT to begin studying, early in the program, the possible use of integrated circuits (ICs) to replace core-transistor circuits. ICs, so ubiquitous today, were only 3 years old then and thus had little reliability history. It was therefore difficult to consider their use in a manned spacecraft without convincing NASA that the advantages far outweighed the risks. It took nearly 5,000 of these simple circuits to build an Apollo computer. Also, the machine's cycle time became fixed. At the time, the production of such circuits was low, and they were more expensive than core-transistor circuits.

The design would be hopelessly outdated technologically by the time of the lunar landing 7 years later, but at the time, using the new microcircuits seemed to be a risk. The CM housed the computer in a lower equipment bay, near the navigator's station. Block II measured 24 by 12.5 by 6.5 inches and weighed about 70 pounds. The machine in the lunar module was identical. Crew members could communicate with either computer using display and keyboard units (DSKY, pronounced "disky").

Two DSKYs were in the CM, one on the main control panel and one near the optical instruments at the navigator's station. In addition, a "mark" button at the navigator's station signaled the computer when a star fix was being taken. A single DSKY was in the lunar module. The DSKYs were 8 by 8 by 7 inches. As well as the DSKYs, the computer connected directly to the inertial measurement unit and, in the CM, to the optical units.

The choice of a 16-bit word size was a careful one. Many scientific computers of the time used considerably longer word lengths, and, in general, the longer the word the better the precision of the calculations. MIT weighed these factors in deciding the word length: the advantages of a shorter word are simpler circuits and higher speeds, and greater precision could be obtained by using multiple words. A single-precision data word consisted of 14 bits, with the other 2 bits used as a sign bit (a one indicating negative) and a parity bit (odd parity).

An instruction word used its 3 high-order bits as an octal operation code and the remaining 12 bits as an address. The Apollo computer had a simple packaging system: its circuits were in two trays consisting of 24 modules, and each module had two groups of 60 flat packs with pin connectors.

The memory in Block II consisted of a segment of erasable core and six modules of core rope fixed memory; both types are discussed fully below. The use of bank registers enabled all of the machine's memory to be addressed. Twelve bits can directly address only 4,096 locations, and the fixed memory of the Apollo computer contained many times that number. Therefore, the memory was divided into "banks" of core, and addressing was handled by first indicating a bank and then the address within the bank. Taking the metaphor "address" literally, there are probably hundreds of "Main Street" addresses in any state, but by putting the appropriate city on an envelope, a letter can be delivered to the intended Main Street without difficulty.

The computer banks were like the cities of the analogy. This scheme made it possible to handle the addressing using a short word, but it placed a greater burden on the programmers, who, in an environment short of adequate tools, had to attend to setting various bit codes in the instructions to indicate the use of the erasable bank, fixed bank, or super bank bit. Although this simplified the hardware, it increased the complexity of the software, an indication that the importance of the software was not fully recognized by the designers.

Memory

The story of memory in the Apollo computer is a story of increasing size as mission requirements developed.

In designing or purchasing a computer system for a specific application, the requirements for memory are among the most difficult to estimate. NASA and its computer contractors have been consistently unable to make adequate judgments in this area. Apollo's computer had both permanent and erasable memory, which grew rapidly over initial projections.

Apollo's computer used erasable memory cells to store intermediate results of calculations and data such as the location of the spacecraft, and as registers for logic operations. In Apollo, they also contained the data and routines needed to ready the computer for use when it was first turned on. Fixed memory contained programs that did not need to be changed during the course of a mission.

Fixed memory leapt to 24K and then finally to 36K words, and erasable memory had a final configuration of 2K words. Part of the software difficulties stemmed from functions and features that had to be dropped because of program size considerations, and part from the addressing difficulties already described. If the original designers had known that so much memory would be needed, they might not have chosen the short word size, as a longer word could directly address a 36K bank, with enough room left for a healthy list of instruction codes.

One reason the designers underestimated the memory requirements was that NASA did not provide them with detailed specifications of the computer's function. NASA had established a need for the machine and had determined its general tasks, and MIT received a contract based on only a short, very general requirements statement in the request for bid. The requirements started changing immediately and continued to change throughout the program.

Software was not considered a driving factor in the hardware design, and the hardware requirements were, at any rate, insufficient. The actual composition of the memory was fairly standard in its erasable component but unusual in its fixed component. The erasable memory consisted of coincident-current ferrite cores similar to those in the Gemini computer, and the fixed memory consisted of core rope, a high-density read-only memory using cores of similar material composition but of completely different design.

A Unique Data Storage Device

Each core in an erasable memory could store one bit of information, whereas each core in the core rope fixed memory could store four words of information. In the erasable memory, cores are magnetized either clockwise or counterclockwise, thus indicating the storage of either a one or a zero. In fixed memory, each core functions as a miniature transformer, and up to 64 wires (four sets of 16-bit words) could be connected to each core.

If a wire passed through a particular core, a one would be read. If a particular wire bypassed the core, a zero would be read. For example, to store the data word 1001000100001111 in a core, the first, fourth, eighth, and thirteenth through sixteenth wires would pass through that core; the rest would bypass it.

The modules were further divided into "banks" of 1,024 words. The first two banks were called the "fixed-fixed memory" and could be directly addressed by 12 bits in an instruction word. The use of core rope constrained NASA's software developers. Software to be stored on core rope had to be delivered months before a scheduled mission so that the rope could be properly manufactured and tested.

Once manufactured, a rope could not be altered easily, since each sealed module required rewiring to change bits. The software not only had to be finished long in advance, it had to be perfect. Even though common sense indicates that it is advantageous to complete something as complex and important as software long before a mission so that it can be used in simulators and tested in various other ways, software is rarely either on time or perfect.

Fortunately for the Apollo program, the nature of core rope put a substantial amount of pressure on MIT's programmers to do it right the first time. Unfortunately, the concept of "bug"-free software was alien to most programmers of that era. Programming was a fully iterative process of removing errors. Even so, many bugs would carry over into a delivered product due to unsophisticated testing techniques.

[Figure: The principle behind core rope, showing data words stored by threading or bypassing the sense wires of individual cores.]

A value is stored in the first core on the left by attaching the top wire from the select circuit to the core and bypassing it with the next three wires. When that core is selected for reading, the wire attached to the core will indicate a one, because all cores in a rope are permanently charged as ones; the wires bypassing the core will indicate zeroes.

The company built a device to do this weaving.

Production Problems and Testing

Development and production of the Apollo guidance, navigation, and control system reflected the overall speed of the Apollo program. Less than 3 years after that, designers achieved the final program objective. This represents a considerable production run for a special-purpose computer of the type used in Apollo. The need to quickly build high-quality, high-reliability computers taxed the abilities of Raytheon. The Polaris machine was much simpler.


Rapid growth, underestimation of production requirements, and reliability problems dogged Raytheon throughout the program. Design changes made by MIT late in development caused the company its initial trouble. The original request for proposal had featured Polaris techniques, so Raytheon bid low, expecting to use the same tools and production line for the Apollo machine. Early hardware failures turned out to be largely caused by contaminated flat packs and DSKY relays. The Block II computers would not work at first due to excessive signal propagation time in the micrologic interconnection matrix.

This sort of problem is usually the result of speeding up development to the point at which changes are not adequately documented. Continuous and careful attention to reliability led to the discovery of problems. Post-production hardware tests included vibration, shock, acceleration, temperature, vacuum, humidity, salt fog, and electronic noise. NASA acquired considerable experience in managing a large, real-time software project that would directly influence the development of the Shuttle on-board software. Software engineering as a specific branch of computer science emerged as a result of experiences with large military, civilian, and spaceborne systems.

In the Apollo program, as in other space programs with multiple missions, system software and some subordinate computer programs were written only once, with some modifications to help integrate new software. However, each mission generated new operational requirements for software, necessitating a design that allowed for change.

Since 1968, when designers first used the term software engineering, consciousness of a software life cycle that includes an extended operational maintenance period has been an integral part of proper software development. Even during the early 1960s, the cycle of requirements definition, design, coding, testing, and maintenance was followed, if not fully appreciated, by software developers. The important difference from present practice was the report's recommendation that modules of code be limited to a length about five times larger than current suggestions.

The main point of the report, and the thrust of software engineering, was that software can be treated the same way as hardware, with the same engineering principles applied. However, NASA was more used to hardware development than to large-scale software and thus initially failed to adequately control the software development.

Combined with NASA's inexperience, MIT's non-engineering approach to software caused serious development problems that were overcome only with great effort and expense.

Managing the Apollo Software Development Cycle

One purpose of defining the stages in the software development cycle, and of providing documentation at each step, is to help control the production of software.

Programmers have been known to inadvertently modify a design while trying to overcome a particular coding difficulty, thus making it impossible to fulfill the specification. Eliminating communication problems and preventing variations from the designed solution are among the goals of software engineering. Two methods of control were used in Apollo: one was a set of standing committees; the other was the acceptance cycle.

Three boards contributed directly to the control of the Apollo software and hardware development. The Apollo Spacecraft Configuration Control Board monitored and evaluated changes requested in the design and construction of the spacecraft itself, including the guidance and control system, of which the computer was a part.

Another board, chaired by astronaut Donald K. Slayton, inspected items that would affect the design of the user interfaces.
