Electronic Military & Defense Annual Resource

6th Edition

Electronic Military & Defense magazine was developed for engineers, program managers, project managers, and those involved in the design and development of electronic and electro-optic systems for military, defense, and aerospace applications.

Issue link: https://electronicsmilitarydefense.epubxp.com/i/707574


Trends

Pathfinding's Evolving Role In Military And Defense Technology

A pathfinding methodology can open up design options that otherwise may not have been considered due to time and manpower constraints.

By Bill Martin

Before we delve deeper, it is best to establish the definition of pathfinding (PF) as it is interpreted in this article: PF is a methodology for exploring alternative implementations to optimize mechanical, thermal, and electrical solutions. PF can be applied to any type of product: integrated circuit (IC), printed circuit board, aircraft, rocket, bridge, ship, etc. PF's key benefit is identification of the best solution from many possibilities. In many cases, PF is limited by the tools' algorithms and by the resources that suboptimal PF tools require; these limitations may prevent an optimum solution from being found. A successful PF tool minimizes the resources and time needed to create structures while maintaining high accuracy and short simulation times, accelerating every aspect of PF. A secondary benefit of the PF methodology is that, once an optimum solution is chosen, the PF analysis can shift to a process's tolerances to determine or improve the solution's robustness. Some might label this systematic method a design of experiments (DoE); a brief sketch of this explore-then-perturb loop appears below.

Pathfinding's Evolving Role

Engineers strive to solve difficult problems with currently available tools and methodologies, but when a design does not work as intended, they must react quickly to determine the root cause(s) and prevent similar failures from recurring. Some tragic examples of such failures include the R.M.S. Titanic (1912), the Tacoma Narrows Bridge (1940), and the Space Shuttle Challenger (1986). Examination of these disasters' root causes revealed design and process flaws that required changes to methodologies, tools, models, and assumptions. Each of these products was designed and verified by teams of engineers, but all were based on assumptions that were too narrow and/or not fully analyzed:

• The Titanic was built with "isolated" chambers and supposedly could not sink unless multiple chambers were breached concurrently. The ship's listing allowed water to flow over the chamber walls, flooding successive chambers in a domino effect. Had the Titanic experienced only a single, isolated chamber failure, the disaster never would have happened.

• The Tacoma Narrows Bridge design failed to account for wind shear and harmonic resonance in winds above 40 mph, which were not unusual at the bridge's location.

• The Challenger shuttle had many components that had not been evaluated below 32 °F, yet the launch was conducted at 30 °F. One such component was the O-ring, the weak link that breached and caused a massive fireball explosion. Had PF with tolerances been performed (simulated at lower temperatures or evaluated in a lab), designers could have uncovered the O-ring's cold-temperature issue.

In retrospect, many product failures could have been avoided if modifications had been made to the methodology and the tools used. Such modifications have insignificant costs compared to the costs of these failures (lives lost, damaged company reputation, costs to resolve problems, product delays, etc.).

Product Development Changes

Over the past century, commercial and military products have become increasingly silicon- and software-centric due to lower cost, weight, and power consumption, as well as increasing requirements for processing and communication capability.
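The pathfinding loop described at the start of this article can be summarized in a short sketch: evaluate a set of candidate implementations against a cost function, keep the best one, then perturb its parameters within assumed process tolerances to gauge robustness (the DoE step). Everything below (candidate names, figures of merit, weights, and tolerance range) is a hypothetical illustration, not data or tooling from the article.

```python
import random

# Hypothetical candidate implementations: process-node choices with assumed
# figures of merit (all numbers are illustrative, not from the article).
candidates = {
    "28nm": {"area_mm2": 100.0, "power_w": 4.0, "cost_per_mm2": 0.08},
    "16nm": {"area_mm2": 55.0,  "power_w": 2.6, "cost_per_mm2": 0.14},
    "7nm":  {"area_mm2": 30.0,  "power_w": 1.8, "cost_per_mm2": 0.30},
}

def score(c):
    """Lower is better: a toy cost function weighing unit cost and power."""
    unit_cost = c["area_mm2"] * c["cost_per_mm2"]
    return unit_cost + 2.0 * c["power_w"]   # arbitrary weighting for the sketch

# Pathfinding step: evaluate every candidate and keep the best one.
best_name, best = min(candidates.items(), key=lambda kv: score(kv[1]))
print(f"best candidate: {best_name}, score {score(best):.2f}")

# DoE / tolerance step: perturb the chosen solution within assumed process
# tolerances (+/-10%) to see how robust its score is.
random.seed(0)
scores = []
for _ in range(1000):
    perturbed = {k: v * random.uniform(0.9, 1.1) for k, v in best.items()}
    scores.append(score(perturbed))
print(f"score spread under tolerance: {min(scores):.2f} .. {max(scores):.2f}")
```

In a real flow the cost function would come from mechanical, thermal, and electrical simulation rather than a toy formula, but the explore-then-perturb structure is the same.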
When designing ICs (SoCs, ASICs, etc.), many technical and business criteria must be met before a product can be successful. For decades, linear silicon scaling allowed developers to cram as much functionality as possible into a cost-effective die size (~400 x 400 mils). If a new (smaller/denser) process node was available and had the required silicon IP, it typically was a "two-second" decision to use the latest node rather than an older, larger one. Using the denser process would automatically shrink the die, thereby reducing cost while improving performance and reducing power. The vital criteria were quickly reduced to a critical few:

1. What is my final die size, which dictates cost per unit? (See the sketch after this list.)
2. Can I achieve the performance and power requirements for my design?
3. Can my silicon supplier meet my supply requirements?
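To make criterion 1 concrete, the sketch below estimates dies per wafer and cost per good die, showing why a die shrink lowers unit cost even when the denser node's wafers cost more. The wafer prices, yields, and node figures are illustrative assumptions, not values from the article; only the ~400 x 400 mil die size is taken from the text.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: gross dies on the wafer minus edge loss."""
    gross = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

def cost_per_good_die(wafer_cost: float, wafer_diameter_mm: float,
                      die_area_mm2: float, yield_fraction: float) -> float:
    """Wafer cost spread across the good (yielding) dies."""
    good_dies = dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_fraction
    return wafer_cost / good_dies

# A ~400 x 400 mil die (the article's ballpark) is roughly 10.16 mm x 10.16 mm,
# about 103 mm^2; a full node shrink roughly halves the area. Wafer costs and
# yields below are assumed for illustration only.
old = cost_per_good_die(wafer_cost=4000.0, wafer_diameter_mm=300.0,
                        die_area_mm2=103.0, yield_fraction=0.85)
new = cost_per_good_die(wafer_cost=6000.0, wafer_diameter_mm=300.0,
                        die_area_mm2=52.0, yield_fraction=0.80)
print(f"older node: ${old:.2f}/die   denser node: ${new:.2f}/die")
```

Running the sketch shows the smaller die yielding many more good dies per wafer, so unit cost drops even with a higher assumed wafer price, which is why the article calls moving to the denser node a "two-second" decision.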
