John: Hello and welcome to the OpenSystems Media e-cast titled “Managing SWaP in ISR Systems.” My name is John McHale, Group Editorial Director with Military Embedded Systems magazine and moderator for today’s event. Our speakers today are: Brian Muzyka, Sales Manager for Defense and Aerospace at Advanced Cooling Technologies; Aaron Frank, Senior Product Manager for Intel Single Board Computers at Curtiss-Wright Defense Solutions; Michael Stern, Global Product Manager for MIL/AERO Products at GE Intelligent Platforms; and Greg Powers, Market Development Manager for Aerospace, Defense and Marine at TE Connectivity.
Now, before we get started, I have some short housekeeping announcements. The e-cast will consume about 35 to 40 minutes, leaving the remaining time for the question and answer session. Before we get going, I want to point out three important buttons on your screen. First, please take note of the ‘Enlarge Slides’ button. By clicking on this button, you will be able to view the slides in full-screen mode. The second important button is the ‘Forward to a Friend’ button, which enables you to send an email invitation to this event. The third button on your console window allows you to enter questions in real time. We will do our best to address these questions during the closing Q&A session. If you have a question intended for a specific speaker, please say so at the beginning of your question. Otherwise, I will hand out the questions equally to our speakers. If you have a question pertaining to the e-cast operation itself, one of our technicians will respond to you during the e-cast.
Please note that, as much as we’d like to, we may not get to all of your questions today. In that case, someone may get back to you after the e-cast with more information. The e-cast will be archived online and will be available for one year. There will also be an MP3 version of the event. Now, I am pleased to turn it over to our first speaker, Brian Muzyka. Brian?
Brian: Thank you, John, and thanks everyone for joining us today. As John mentioned, I am the Sales Manager for Defense and Aerospace at Advanced Cooling Technologies, and I will be kicking off our discussion on how to improve the size, weight and power, the SWaP, of your thermal management solutions. Most applications require some form of conduction. It is rare that there is enough available space for a heat sink in close proximity to all of your critical heat-generating components and, therefore, heat spreading is one of the most important considerations in your system.
In most embedded computing, optical device and electronic cooling systems, it’s the primary thermal management concern. Heat-spreading performance is dictated by the thermal conductivity of your heat spreader, the cross-sectional area, and the length over which you need to move the heat.
So, to enhance SWaP, thermal performance should be maximized while size and weight are minimized. What we are doing in this slide is comparing two-phase heat transfer solutions to bulk material conduction solutions. If you look at the graph in the top right, you can see that two-phase heat transfer devices, such as heat pipes, HiK plates and vapor chambers, have much higher effective thermal conductivities than bulk metal solutions such as copper and aluminum. For instance, a heat pipe can move heat with an equivalent thermal conductivity of 10,000 watts per meter-K, compared to aluminum at 167 or copper at 380 watts per meter-K. The range we show for a heat pipe, 10,000 to 200,000, is mostly a function of its length: because the temperature difference across a heat pipe stays nearly constant regardless of its length, the effective conductivity scales with length. So, the longer the heat pipe, the higher the effective conductivity.
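To see why that conductivity gap matters, here is a back-of-the-envelope 1-D conduction estimate using Fourier’s law. The heat load and geometry below are illustrative assumptions, not figures from the talk:

```python
# Fourier's law for 1-D conduction: delta_T = Q * L / (k * A).
# Heat load and geometry are illustrative assumptions only.
def delta_t(q_watts, length_m, k_w_per_mk, area_m2):
    """Temperature rise (deg C) across a 1-D conductor carrying q_watts."""
    return q_watts * length_m / (k_w_per_mk * area_m2)

Q = 50.0    # watts to move
L = 0.2     # 0.2 m path, roughly 8 inches
A = 1e-4    # 1 cm^2 cross section
print(round(delta_t(Q, L, 167.0, A)))     # aluminum, 167 W/m-K: ~599 C rise
print(round(delta_t(Q, L, 10_000.0, A)))  # heat pipe, 10,000 W/m-K: ~10 C rise
```

Same heat, same geometry; only the conductivity changes, which is why two-phase devices dominate long spreading paths.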
For most electronic cooling applications, heat pipes are around six to ten inches in length and sit near the lower end of that scale, around the 10,000 watts per meter-K value. Heat pipes move heat in a single direction, so they suit point-to-point, spot-cooling applications.
If you need multi-directional heat spreading, you may look at a HiK plate or vapor chamber. These types of solutions have the highest thermal performance listed in the chart. For instance, HiK plates exceed 500 watts per meter-K and reach as high as 1,200 watts per meter-K in bulk thermal conductivity.
Also shown in the table to the right are considerations for weight. This table normalizes density to that of aluminum. The HiK plate has the highest pound-for-pound multi-directional heat-spreading capability: its density is very comparable to aluminum, but its K value is much higher. A HiK plate has embedded heat pipes held in place with solder, which is only slightly more dense than the aluminum base, so the density range is essentially a function of how many heat pipes are used within the plate.
So, now that I have given you an overview of the SWaP considerations, we will dive a little deeper into how these types of solutions work. On this slide, we’re talking about heat pipes and how they passively transfer heat. A heat pipe has no moving parts and requires no input power. It is basically an evacuated tube containing an internal wick structure and a working fluid. Overall, heat pipes can move anywhere from tens to hundreds of watts, depending largely on the geometry and the gravitational orientation in your system.
The operation of a heat pipe is completely passive, which is one of its main benefits. Looking at the figure to the right: at the heat source, the working fluid vaporizes and forms an internal pressure gradient, so the vapor moves to the cooler region. At the cooler region, the fluid gives off its latent heat and condenses back to a liquid. This liquid is then passively pumped back to the heat source by capillary force, provided by the internal wick structure.
As I touched on before, HiK plates are heat pipes embedded into aluminum or another base material. The cross section shown in the top right points out how a heat pipe is implemented in the plate. Basically, we press heat pipes into the plate, solder over the top, and then perform a secondary machining step. This allows the heat pipe to sit extremely close to the surface, and to your component, to capture the heat and then use two-phase heat transfer to spread it out very efficiently.
This process also leaves a very smooth mounting surface for your electronics and other critical components, so you don’t have to worry about different surface profiles. Overall, you can end up with a very thin, workable frame, as thin as 0.072 inches, that provides thermal conductivities ranging from 500 to 1,200 watts per meter-K, as I mentioned previously. I want to stress that those are real-world test results: we measured the delta-Ts on an actual test article delivered to a customer, then went back into our models and increased the bulk thermal conductivity until the model matched the hot spots in the test fixture. That gives you a real-world value you can put into your own models, somewhere in that range, and feel confident the performance can be met.
These plates are also ruggedized for harsh environmental conditions. Most platings that can be applied to an aluminum plate can also be applied to a HiK plate, and HiK plates routinely pass thermal cycling, shock, vibration and other environmental tests. Companies often ask us about the CTE mismatch between the heat pipe, the solder and the base material, but we have demonstrated hundreds of thermal cycles on these plates. What happens is that the CTE of the base plate dictates the overall thermal expansion, and the heat pipe and the solder simply go along for the ride.
So, one of the areas where we utilize HiK plates for enhanced heat transfer is at the board level of electronic systems. By using a HiK card frame, for instance, you get several SWaP benefits that are not achievable with a simple metal spreader. First, you get the higher heat transfer rate associated with the HiK plate, but you can also design the heat pipes to route along the card edge and isothermalize it. This lowers the thermal resistance at that interface: as you transfer heat into your chassis or next-level assembly, you avoid a large heat flux at the interface, which would otherwise drive your temperature difference.
The heat pipes also do most of the thermal work, so you can thin out your frames as much as possible or add weight-reduction pockets for further weight savings. For air-cooled systems, HiK plates can play a large role in reducing conduction gradients under the fin stack. This creates higher fin efficiencies and often lowers the required fin volume. If you look at the bottom-right figure, imagine a heat pipe directly under those fin stacks: if there were a hot spot in the middle, you could spread it out evenly across the entire geometry and dissipate a lot more heat with a lower temperature gradient.
So now we will look at an application that is prevalent in the embedded computing world. In this case, the customer’s heat loads increased dramatically from one design to the next, and they didn’t have the schedule to do a full redesign of their system. Instead, they looked at a quick retrofit-type design. Shown to the right are the boundary conditions of the system: it was cooled along the base with a liquid cold plate, and they needed to get the heat out of the card guides and down to that base as efficiently as possible. The solution they chose was a bolt-on HiK plate. We designed straight heat pipes to move heat directly from the card slots down to the base, and the plate was simply bolted externally to the chassis. This required no redesign of their legacy parts and was very efficient at moving the heat down to the base.
So, in this final slide we are showing the results for that system. Remember the boundary conditions: the heat had to be moved to the base, and the heat pipes were embedded to do that efficiently. On the left-hand side of the picture is the aluminum spreader: we mocked up a similar-size aluminum plate, bolted it on, ran thermal models, and determined they would see around an 82-degree temperature at their electronics hot spots. A similar analysis of the HiK plate got that down to about 32 degrees. So there is about a 61% reduction in overall temperature gradient, which gave them a lot of flexibility to add more heat, and more capability, to their system, or to pursue other weight reductions to improve overall SWaP.
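The 61% figure checks out arithmetically; a minimal sketch, assuming both slide values are temperature rises above the liquid-cooled base:

```python
# Check the reported gradient reduction, treating both slide values as
# rises above the cold-plate base (an assumption on our part).
aluminum_rise = 82.0  # degrees, plain bolt-on aluminum spreader
hik_rise = 32.0       # degrees, bolt-on HiK plate
reduction = (aluminum_rise - hik_rise) / aluminum_rise
print(f"{reduction:.0%}")  # -> 61%
```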
That wraps it up for the thermal portion. I hope I have given the thermal and mechanical engineers out there some valuable tools for dealing with these heat transfer problems, and some ideas on how to improve thermal performance while reducing the size and weight of your system. With that, I’ll pass it back to John. Thank you.
John: Thank you, Brian. Excellent presentation. I would just like to remind the audience to send your questions in. If you have any for Brian, shoot them through now. We will get to them after our fourth speaker, during the Q&A session. Now, for our next speaker, Aaron Frank, Curtiss-Wright. Aaron.
Aaron: Thank you, John. Good afternoon, everybody. This is Aaron Frank with Curtiss-Wright. I would like to share with you today what we are seeing in the ever-present trend to reduce size, weight and power, specifically in the area of processing and, more specifically, ISR processing. I would like to tie these trends into the growing capabilities we are seeing with Intel processors.
As we are all aware, the traditional ISR application space has consisted of complex, specially built systems, which were typically very large, very high-powered, very hot systems. Initially, these systems were all ground based and, over time, they moved into the air and onto a variety of different platforms. Today, these same systems need to be deployed on smaller and smaller platforms, which severely hinders our ability to meet size, weight and power targets with traditional ISR architectures.
As you can see in the pictures on this slide, some deployment examples: the most SWaP-critical ones are shown on the right side, where we have small UAVs, some of them even hand-held, which can be used for various levels of ISR applications.
The other trend we are seeing, which I am sure is no surprise to most listeners, is that the typical requirements of ISR systems are not diminishing as the systems become smaller. In fact, they continue to become more and more complex. Imaging systems have much higher resolutions these days and require a higher level of processing than in the past, with the same or higher throughputs to ingest data, analyze data, and store and relay information. The same is true of other ISR-type applications and, of course, we are all aware of ever-shrinking defense budgets and things like sequestration, so we need to find ways to cost-effectively reuse developed intellectual property and leverage existing systems and architectures. There is also a trend toward rapid development and deployment and toward feeding common technology into different platforms, be that multiple airborne systems or a mix of airborne and ground-based systems. Of course, SWaP is the key to meeting all of these demands.
So, I would like to look briefly at what we’d call yesterday’s systems, which are really multiprocessor, or many-processor, systems. These are systems that contain many different processing boards, each with one or more single-core processors. In the past, the workhorses of many of these systems were Power Architecture processors with AltiVec vector processing engines, along with other custom DSP-based and FPGA-based systems. Today, we are seeing a move toward using a common computing platform for a lot of these same functions.
In these many-processor systems, we also saw a variety of different fabrics used to tie the boards together, including Ethernet, Serial RapidIO, InfiniBand, PCI Express, StarFabric and others. On the software side, a variety of software environments were used, including VxWorks and Linux operating systems, a whole range of different APIs for application software development, and custom libraries tailored to the specific hardware. Of course, the more boards and hardware required to get the job done, the more power is required, the more heat is generated, the more weight, and so on.
So, let’s flash forward to today and look at the latest generation of processors from the major vendors, most notably Intel, Freescale and AMD. These vendors have integrated multiple processing cores into a single piece of silicon. Although the sweet spot seems to be two-core and four-core processors, driven by today’s desktop and laptop computing world, there are also processors with more cores, such as the eight-core Hyper-Threading Xeon processors from Intel, or Freescale’s 12-core, dual-thread part, which puts 24 virtual cores on a single piece of silicon.
What we’re also seeing is that the core-to-core interconnect fabric is typically a high-speed bus with multiple levels of processor cache to ensure the cores are not starved when they share common memory interfaces. And with multiple memory channels, the processor can take in massive amounts of data to feed the processing cores.
The other thing we are seeing is that the processors have specialized accelerators that can be used for math-intensive processing. For example, the Power Architecture series has brought back the AltiVec processing engine, whereas the Intel Core i7 family has vector processing engines in the form of the AVX and AVX2 instruction sets. They also have onboard graphics accelerators, which can be used as general-purpose GPU processors, offering in excess of 350 gigaflops of floating-point performance and upwards of 20 or more GPU execution units, again in a single piece of silicon. Common to all of today’s processors is PCI Express for connectivity to the outside world.
I’d like to talk for just a moment about Intel’s processor technology cadence and how it relates to managing risk in the adoption of new technologies. Intel, as everybody knows, is the de facto standard for the desktop world, and has for many years followed what they call a tick-tock model. In the semiconductor world, there are two primary high-risk activities. One is changing process technologies, which is high risk because new processes bring new tools, new models and new unknown challenges. On the other hand, changing the internal architecture of a chip is also highly risky, as new design techniques and algorithms must be developed and validated. Intel’s tick-tock cadence permits only one of these to change at a time: they either change the process or they change the architecture, but not both. That has a huge benefit in terms of managing risk from generation to generation as the technology advances.
So, here is an example of a 3U processing board built around the latest Intel Core i7 processor. With four cores and eight threads running at 2.4 gigahertz, the raw computing power of this processor exceeds 173,000 DMIPS. Compare this to a common processor of yesteryear, the 7447 PowerPC, which gives approximately 3,000 DMIPS: a huge jump in performance. Now, add to this the processor’s built-in AVX2 vector processing engines, which provide roughly 300 gigaflops, and the graphics GPU, which adds another 350 gigaflops, and we can see that a single board with a single processor is the equivalent of many, many of yesterday’s 6U boards. As a comparison, the performance of standalone, discrete GPUs tends to be higher, but they are not as tightly integrated as a single-chip solution such as the Intel part.
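Putting the figures quoted above side by side (a sketch using only the slide’s numbers):

```python
# Generational comparison using the figures quoted on the slide.
core_i7_dmips = 173_000  # 4-core / 8-thread Core i7 @ 2.4 GHz
mpc7447_dmips = 3_000    # legacy 7447 PowerPC
print(round(core_i7_dmips / mpc7447_dmips))  # ~58x raw integer throughput

# On-die floating point: AVX2 vector engines plus the integrated GPU.
avx2_gflops = 300
gpu_gflops = 350
print(avx2_gflops + gpu_gflops)  # 650 GFLOPS from a single piece of silicon
```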
Lastly, to get data in and out of the processing engine, PCI Express connections are typically used to communicate with other boards in the system. At today’s rates, that gives you up to 8 or 16 gigabytes per second of connectivity over a fabric built into the silicon, again saving the size, weight and power of extra chips that would otherwise have to be put on a 3U board.
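Those 8 and 16 gigabyte-per-second figures are consistent with PCIe Gen3 x8 and x16 links; a quick sketch, assuming Gen3’s 8 GT/s line rate and 128b/130b encoding:

```python
# PCIe Gen3 usable bandwidth: 8 GT/s per lane with 128b/130b encoding.
def pcie_gen3_gbytes_per_s(lanes):
    line_rate_gt = 8.0          # Gen3 raw transfer rate per lane (GT/s)
    efficiency = 128.0 / 130.0  # 128b/130b encoding overhead
    return line_rate_gt * efficiency * lanes / 8.0  # bits -> bytes

print(round(pcie_gen3_gbytes_per_s(8), 2))   # x8  link: ~7.88 GB/s
print(round(pcie_gen3_gbytes_per_s(16), 2))  # x16 link: ~15.75 GB/s
```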
So, putting this all together, let’s look at a system example with three processing boards. This is just one example of a small ISR processing system. Here we have put together three single-board computers with Intel processors: two boards are used to acquire and digitize sensor input, and a third further processes the data for analysis, display and storage. This three-board combination has close to two teraflops of floating-point performance and, because these are standard 3U VPX form factors, the entire signal acquisition and processing core occupies a volume of just 60 cubic inches. The total power of such a system is under 200 watts, which is quite manageable in the 3U form factor.
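From the totals quoted for this example, the packing density works out as follows (illustrative arithmetic on the slide’s figures only):

```python
# Density of the three-board example, from the totals quoted on the slide.
gflops = 2_000.0     # ~2 TFLOPS across three 3U boards
watts = 200.0        # total power budget
cubic_inches = 60.0  # occupied volume
print(gflops / watts)                # 10.0 GFLOPS per watt
print(round(gflops / cubic_inches))  # ~33 GFLOPS per cubic inch
```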
So, when we look at the larger picture of a deployable system, I think we can all agree that the thermal design challenges with 3U boards tend to be much simpler than with 6U boards, due to the proximity of the circuitry to each of the side cooling walls of a chassis, and, by using multiple single-board computers, each with its own multicore processor, the heat is better managed across multiple modules in the chassis. Then, using the high-performance fabrics built into the chips, such as PCI Express, and optimized software middleware, such as shared-memory drivers, OFED and [inaudible 00:22:01] libraries for optimized vector processing, from a standard software application perspective we can achieve both the performance and the ease of development and portability we need to rapidly develop and deploy today’s ISR applications. The result of all of this is a lower overall development and deployment cost and a huge reduction in size, weight and power compared to similar 6U systems. Thank you very much and, with that, I’ll turn it back to John.
John: Thank you very much, Aaron. Excellent presentation. Just another gentle reminder to get your questions in. We will get to them after our last speaker. Our third speaker is coming up next: Michael Stern of GE Intelligent Platforms. Michael.
Michael: Thanks, John, and thank you, Aaron and Brian, for starting us out so well. I’ll pick up on a couple of points made in the previous presentations and lead you through my slide deck, which is probably too long and will force me to speak fairly quickly, but here we go. I am a product manager within GE, and I work with a team of people who put together high-performance embedded computing solutions for ISR applications.
Now, these are some of the topics I will try to cover in this short presentation and, as John said, feel free to put your questions in so we can get to them at the end. I will outline some of the typical platforms we address with our open architecture products. We sell both hardware and software optimized for these types of applications, and I am going to stress the software side a little more today. For example, in the first presentation we heard about how to get the heat out, but another way to think about it is how to use your resources really efficiently, so that you are not generating as much heat for every CPU cycle you use.
So, these are some of the bullets I will try to cover: a little bit about the value proposition of high-performance computing, as seen in applications like data centers and scientific computing, typically based on Linux clusters; a little bit about how we tune for size-, weight-, power- and cost-sensitive applications in defense; and then some examples of a product we have, a software environment that lets you tune for these types of platforms.
So what I have here is a little diagram I put together. It is fairly generic, but I want to use it to illustrate some points about typical sensor processing applications, which might be deployed on an airborne platform, a ground mobile platform or, indeed, some kind of naval platform. What I am trying to show is that, for back-end processing, the little dotted square on the right, we have an opportunity to use an open system architecture. As Aaron said in his presentation, the Intel chipsets coming to market today, from the laptop end of the spectrum all the way to the Xeon server class, are extremely high-performance processors with extremely good signal processing capability. The AVX engine within the Core i7 outperforms some of the previous PowerPC AltiVec platforms that have been in wide use for many years. Indeed, we have applications where customers have taken what used to be three quad-PowerPC boards, replaced them with a single dual-node 6U card, and had headroom to spare, for radar-type applications, for example.
The really big bang for the buck is on this next slide: by using applications developed on high-performance Linux clusters, servers or desktop machines, we can leverage a really big community of very clever people who understand how to do data processing efficiently. What I am alluding to is that, in addition to doing DSP and sensor processing to extract targets from ISR chains, be it images, radar or signals intelligence, there is an opportunity to integrate very advanced analytics on the back of that. Bringing that in within the Linux environment allows you to take applications developed by a diverse community of intelligence analysts, as well as your embedded system developers, the people developing the DSP-type applications, and leverage them into a deployable system in various form factors.
So, here again, we’re typically talking about 6U and 3U VPX. Why VPX? Because it allows you to use the very same fabrics Aaron mentioned: PCIe on 3U, for example, and on 6U typically 10 Gigabit Ethernet or InfiniBand, for very fast data movement through your multiprocessor system. This allows you to do something called ‘knowledge-assisted processing.’ I won’t go very deep on this, but basically it allows you to load mission data into an onboard data store prior to your mission and then, knowing where you are, extract data from that store and compare it to what the sensor is seeing in near real time. So you can actually be a lot more intelligent about acting on the real-time data you’re receiving off your system.
So, this is a simple diagram explaining what I have been talking about up until now: the idea that software developed on the desktop, or indeed in a data center, can now be deployed in medium to small form factor systems based on the Intel architecture, or an Intel-plus-GPU architecture, for example, to greatly accelerate your application in the field. The reason for doing this is that it greatly enhances the operational effectiveness of these platforms. It also means that the back-end processing node, which is typically a Linux cluster, can be multipurpose; it can have a multi-mission capability. It might not just do radar; it might handle multiple sensors, or be adapted for use in different operational scenarios. That is where the real payoff of these open architecture systems comes.
So, in addition to that, GE offers a software environment we call AXIS. It is aimed at providing an easy-to-use graphical user interface along with a very high-performance interprocess communication middleware library, and we have recently introduced an open API, MPI, into this offering, which means customers can use it in the knowledge that there is no vendor lock-in. They can decide to change vendors and, indeed, that is one of the key things our DoD customers want. They want to be able to compete multiple vendors against each other and buy these boxes from different vendors at different points in time. So you really have to be able to put together a compelling value proposition to stay on these platforms over time.
The other point I want to make on this slide is that having a hardware abstraction layer along with these open APIs, the VSIPL libraries Aaron mentioned earlier for DSP and math, and MPI for the middleware interprocessor communication, allows you to move your application. It means you are not tied to the particular hardware generation you are on: you can take advantage of Intel’s microarchitecture updates as they come to market to get the next performance boost.
Just a quick screenshot here showing the environment that AXIS offers. The real key, and I won’t go into each screen, is that it allows you to see your application mapped to your embedded system in a way that lets you optimize its size, weight and power. In an HPC cluster in a data center, people do worry about heat, power and performance but, in a deployed system, it is much more critical: more weight on an aircraft, for example, equals less loiter time, less operational effectiveness and less range out in the field.
So, with the AXIS middleware and the MPI implementation, again, we are offering the ability for people to develop on desktop machines and move that application straight into a lightweight, portable, deployable system. As for the interconnects, as Aaron mentioned earlier, we support PCIe, Ethernet, InfiniBand and legacy interconnects for Power Architecture as well.
Here is where the rubber meets the road. You can take such an environment and start developing your application before you get the embedded hardware, which means customers can really shorten their time to market. Our environment supports people even before they get the rugged hardware, and way before they even get the lab system; we’re able to engage and show customers what the performance is going to look like. This is an example of technology insertion: our first-generation 6U platform with two Sandy Bridge generation Core i7s would give you about two gigaflops per watt and now, with our latest fourth-generation platform, we can deliver a lot more processing power within the same power and cost envelope. That is what customers want: to be able to upgrade over time.
So, just a quick slide here showing an example of a 3U multi-board system using PCIe interconnect, with sockets to move the data across our latest, fourth-generation, 3U boards. This is running an image processing application for a ground vehicle ISR application.
The next slide shows a multi-board 6U VPX system. These are typically aimed at higher-performance applications, maybe radar or image processing, which might be airborne as well. The advantage we have with some of these systems, in both 3U and 6U in fact, is that they can be conduction cooled, especially if you take on board what Brian said in the first presentation about how to get the heat out to the sidewall. These are not products that require exotic cooling techniques, but they would benefit from the type of cooling techniques Brian mentioned in his presentation.
So, in summary, I hope I have illustrated some of the advantages of using a Linux-based open architecture platform for embedded back-end processing. Again, the link between knowledge-assisted processing techniques and analytics, and data that might have been gathered on previous missions or from other assets, say satellites, can enhance the operational effectiveness of the deployed platform in the field; that is where this really pays off. It allows people to integrate real-time signal processing with very advanced data processing and analytics out in the field, all brought together not only by the software we offer but by the VPX form factors that are out there. So, with that, I’ll thank you very much and hand it back to John and over to the next presenter.
John: Thank you very much, Michael, and this brings us to our last speaker for the day. Once he is done, we will start our Q&A session, so please get your questions in. Our final speaker is Greg Powers of TE Connectivity. Greg.
Greg: Great. Thanks, John. I am Greg Powers, Market Development Manager for TE Connectivity’s Aerospace, Defense and Marine division. I would like to briefly discuss connectivity considerations in managing SWaP in ISR systems and how things like functional density are leveraged.
So, ISR systems typically generate and manage massive amounts of data. They're renowned for the needle-in-a-haystack scenario. They are typically very high performance and, to manage SWAP, they strive for the highest functional density. This is a key point in connectivity considerations: by maximizing functional density, a designer can maintain high performance while minimizing SWAP. ISR systems are also becoming more and more embedded-computing oriented in order to accomplish local processing. Modular, scalable solutions are very beneficial in this regard, as they reduce time and cost, improve availability and reuse, and allow multiple compatible devices and vendors to be involved in system configuration.
The primary trade when considering connectivity in ISR applications is performance versus cost, the C in the acronym SWAP-C. We'll focus on SWAP and functional density in the coming slides, but a proven approach to connectivity definitely influences system cost. Selecting components with signal-integrity and mechanical-integrity headroom will ensure system performance; it can extend the life cycle of a system and potentially allow for upgrades without significant redesign. Selecting components that are modular and scalable and that support standard packaging practices encourages reuse and helps control the C part of the equation by maximizing economy of scale and minimizing the designer's learning curve, qualification costs and things like that. Let's look at some connectivity examples at the leading edge of ISR technology today.
The primary theme of this slide is increasing functional density. This means that as interconnect density plateaus, the data rates or throughput are increased to support growing data demands. All of these board-level data solutions are either matched-impedance copper or fiber optic based. They are also modular and scalable for packaging flexibility, ranging from small form factors to 6U or larger. The MULTIGIG RT2 and RT2R are the lightest VPX connectors and have demonstrated data rates exceeding 10 gigabits per second. They are dual use, meaning space compatible, for the upcoming VITA 78 SpaceVPX ISR applications, with the original RT2 recommended for VITA 47 environments and the newer RT2R recommended for VITA 72 environments. The VITA 66 and 67 modules are fiber optic and RF respectively and provide very high density backplane disconnectability, with the VITA 66 modules featuring MT termini and the VITA 67 modules high-performance SMPM contacts. The VITA 62 MULTI-BEAM XLE complements the VPX data connectors as a dedicated high-capacity power supply connector.
Now, moving on to the Mezalok family: the Mezalok family increases functional density for XMC applications with improved signal integrity and is standardized as VITA 61 (XMC 2.0). It also features significantly improved mechanical integrity over the original VITA 42 components, with a MiniBox contact interface providing four points of contact per circuit and a well-demonstrated BGA board attach.
Finally, the Fortis Zd family was specifically designed as a next-generation dual-use mil/aero connector. Again, it features the MiniBox contacts. It is modular, comes in multiple configurations and features data rates exceeding 12 gigabits per second. The Fortis Zd has multiple shell and shield options, as exemplified in the [inaudible 00:38:56] connectors shown on the bottom right. The connector provides a high-performance alternative for non-VPX applications. It is not VPX compliant, but it is VPX compatible in that it supports many of the same packaging practices.
Let’s take a closer look at two of these products, the Vita46 compliant multigig R2GR and the soon to be released Vita66.4 fiber optic module. So, as we move along, this slide provides a better understanding of the Vita46 compliant multigig RT2R. It is really a showcase of lessons learned in the molero industry over the past 40 years. It couples the right materials for applications ranging from fighter jet based [inaudible 00:39:37] radar to ISR payloads. It has four points of contact at the [inaudible 00:39:42] interface for optimal level of redundancy. It has advanced manufacturing features like compliant pin board attached, which is capable of penetrating [inaudible 00:39:50] and it is installed via flat rock tooling. It also features a new level of signal integrity that has really transformed the next generation of ISR platforms from megabits per second to gigabits per second. When coupled with the available machine guide hardware, the Vita72 rugged and is ready for the most demanding ISR applications. So, I think that is a good glimpse of the anatomy of the BPX connector.
Moving on, let’s take a look at the soon to be released Vita 66.4 fiber optic module. Fiber optics are becoming more and more practical and deployed today and the Vita Committee has been working very diligently to enable solutions. Although able to support any BPX embedding computing applications, it’s real strength is unleashing the power of 3U small form factor constructions. It’s similar in concept to the Vita 67.1, which is the RF module shown on the lower left. Both can actually sit side by side in a standard 16 [inaudible 00:40:1] module footprint. The 66.4 features the 1MT ferrule and is presently capable of carrying up to 24 fiber optic lines within BPX guidelines. The Vita 66.4 draft standard is in the later stages of development and should be released in the upcoming year.
Let’s move on to some input/output topics and what is going on there relative to managing SWAP via functional density. So, this slide indicates to enable more data flow, the industry is installing fatter pipelines. In the lower left, you see some legacy solutions that were targeted at lower data rates. For instance, the quadrax contacts were originally designed to support 100 megabits per second ethernet. In contrast, in the upper right, you’ll see a variety of fiber optic interconnect with the MC6 M T-base connector being an extreme example of functional density. That can hold, actually, multiple MTs in a 38 triple nine style shell.
So, a primary area of interest is the fast copper connector shown in the center. These products provide robust solutions to those seeking elevated data rates but possibly not ready to make the leap to fiber optic technology for one reason or another. Let's do a quick examination of the CeeLok FAS-X.
So, taking a look at this CeeLok FAS-X connector: it comes in two configurations, a single-insert size 11 shell or a four-insert size 25 shell. It has integral wire management, it is easy to assemble and field repairable, and it has been engineered for very low crosstalk with a continuous shield through the connector. The 100-ohm insert has eight standard 39029 size 22 contacts per insert, and the 100-ohm characteristic impedance makes it suitable for Ethernet and a variety of other 100-ohm protocols.
So, moving on, we will take a look at the internals of the connector. This slide depicts some of the physical features behind the impedance control, specifically the differential-pair and shield management. It also shows the modularity and scalability of the interface, which can be populated with the eight-position 100-ohm insert or a variety of other inserts offering a selection of power and density. So, this interface is very flexible and fully capable of supporting things like 40 gigabit Ethernet in a very flight-worthy package today.
Finally, let’s touch on some of the fiber optics solutions for use in ISR systems. Moving on to this next slide, we will see that the Dutch brand MC5 multi-way connector uses the tried and true one and quarter millimeter ceramic termini and is deployed in many systems today. It was carefully engineered for precision performance in the most adverse ISR applications. As you can see from the images, the family is also designed for ease of maintenance with a removable socket holder to allow access to the termini interfaces for cleaning and inspection. The connector is suitable for multi-node and single mode fibers and is rated at more than 1,500 matings.
Another example of a high-performance and well-demonstrated interface is the 83526 Expanded Beam. This was originally developed for ground tactical applications, and the interface can also be found in applications ranging from military airborne EW systems to commercial aircraft in-flight entertainment. Expanded Beam technology is very resistant to contamination: it is a non-contacting interface with some of the highest durability on the market, and it is also the easiest to inspect and clean.
TE works with a variety of standards organizations to provide better and more fat-pipe solutions. One example is the present work being done within the [inaudible 00:44:43] to optimize performance of the standard 38999 with 29504 fiber optic termini. So, as you can see, TE is working to assist in the management of SWAP and ISR systems with a drive to increase functional density at all packaging levels. As we say, every connection counts. Thanks, and that's it for my portion of the presentation. Back to John.
John: Thank you, Greg, and thanks to all the speakers. Excellent presentations. How about we get right to our Q&A session? The first question from the audience is for Aaron Frank. Is PCI Express the only option for multi-core today?
Aaron: Thank you, John. Excellent question. I am happy to expand on that. In the same way that we have seen the collapsing of multiple separate processing chips into a multicore processor, we are also seeing the convergence of processing and connectivity. The one common element is PCI Express, on almost every processor, every GPU and every FPGA and DSP engine out there. We are also seeing those converging with networking elements, so one-gigabit networking cores are being brought into the processors, then 10 gig and 40 gig. So, PCI Express is not the only fabric. However, as Greg mentioned about the design balance between performance and cost, I'd like to add that the cost is not always a dollar cost; it is also the cost of power, the cost of integration and ease of development.
The key here is to right-size for an application. As we bring a SWAP system down in size, into something like a 3U board, once you put a processor and memory on a 3U board, you don't have a lot of extra space for the complexity of extra circuits for something that is not native to the processor. We know that PCI Express is generally used even to connect external fabric chips and, by using PCI Express, you really get the best possible transfer rates, native to the processor, in the smallest possible SWAP envelope. Tie in software and middleware that let you use PCI Express for things like OFED or shared-memory drivers, and you have the best of both worlds: the highest-bandwidth fabric as well as ease of development. So, it's really about right-sizing into a small SWAP space.
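[Editor's note: as a rough illustration of why native PCI Express is attractive as a small-SWAP fabric, per-lane throughput can be estimated from the transfer rate and the line-code overhead. This is a back-of-the-envelope sketch using the standard published PCIe generation figures; it is not specific to any product discussed here.]

```python
# Back-of-the-envelope PCIe effective bandwidth.
# Gen1/Gen2 use 8b/10b encoding (80% efficient); Gen3 uses 128b/130b.

def pcie_effective_gbps(gt_per_s, encoding_efficiency, lanes=1):
    """Effective payload bandwidth in Gb/s for a PCIe link."""
    return gt_per_s * encoding_efficiency * lanes

gen2_x4 = pcie_effective_gbps(5.0, 8 / 10, lanes=4)     # 16.0 Gb/s
gen3_x4 = pcie_effective_gbps(8.0, 128 / 130, lanes=4)  # ~31.5 Gb/s

print(f"Gen2 x4: {gen2_x4:.1f} Gb/s")
print(f"Gen3 x4: {gen3_x4:.1f} Gb/s")
```

A four-lane Gen3 link roughly doubles Gen2 throughput over the same pin count, which is the kind of gain that matters when a 3U card has no room for an extra fabric chip.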
John: Thank you Aaron. Our next question is for Greg Powers. It looks like 10 gigabits per second is about it for the RT2 or RT2R. Is that a correct assumption?
Greg: Well, I think that is becoming more of the mainstream. The VPX specifications were originally designed for 6.25 gigabits per second, but it's gone beyond that. People are looking at 10 pretty regularly now. We have had folks demonstrate above that, upwards of 14, but the connector is one component of the entire channel, so it really depends on the context of the system as well: the board design, the board materials and especially the distance the signal is intended to go. That brings the entire technology of fiber optics more to light, no pun intended. The VITA committee has done a very good job of provisioning for that technology to be invoked very seamlessly. So, I think that pretty much covers it. We are pushing the envelope, and we are going to continue to do that.
John: Thanks, Greg. The next question is for Brian. How often do you recommend thermal grease or phase change material be used to reduce junction thermal resistance?
Brian: Yes, thank you. That is a good question and something we deal with a lot. There are a lot of considerations when you are specifying a thermal grease, a PCM or even a gap pad as your interface material. A lot of that has to do with what type of heat flux you have and what type of operating conditions, whether steady state or transient. So, there is not one blanket recommendation; it's a case-by-case basis. But if the person asking that question has more input on what kind of system they are looking at, we would be happy to dive in and give a much more thorough answer on what type of thermal interface material might be applicable for their system.
We have done work implementing phase change material at the actual junction level to buffer the duty cycle in certain electronics, and there you can see a pretty good reduction in thermal resistance by averaging the heat load out instead of sizing for the full peak load. There, we can do a lot to help, but it is really a matter of the heat flux they have and what their operating conditions are.
John: Thank you, Brian. I have a question here for Michael Stern. Does GE’s Axis software only support Intel architecture platforms?
Michael: Thanks, John, and thanks for asking that question. The beauty of the AXIS software is that it abstracts away from the hardware and, with the open middlewares we use, including the libraries for [inaudible 00:50:08] and DSP and also the Open MPI middleware, we can actually run across Linux, Windows and VxWorks platforms and, indeed, across different architectures in the same box. So, for example, we have some 3U systems using peer-to-peer PCIe across a 3U VPX platform, where we might have a PowerPC in slot one doing some real-time control type applications and then an Intel CPU plus an NVIDIA GPU in the same box running Linux. We are able to move that data from the VxWorks platform across PCIe over to the GPU platform for number crunching, while VxWorks in slot one gives you real-time control for the platform.
So, the answer is no, it is not tied to just one fabric or one platform. It is portable from generation to generation and across different interconnects, including Ethernet, InfiniBand, PCIe, [inaudible 00:51:07] RapidIO and, for people who have been around for a while, StarFabric.
John: Thank you. StarFabric goes way back, doesn't it? Next question. This one's not assigned, so I will throw it out to whoever wants to take it. What, if anything, is each of you doing in RF convergence for DOD ISR applications? Try to be specific about what effective solutions you are examining. So, who would like to go first?
Michael: I don’t mind taking that one first. My focus has been on back end processing on the RF side. Today, we don’t have a particular offering but what we do have is the ability to interface to standard interconnects like, a lot of our customers are using either, as Aaron mentioned, PCIE coming off of an FPGA front end, which is doing down conversion from the RF side or, in some cases, lots of 10 gig links, like from a radar front end. So, our focus is on providing the open architecture back end.
John: Okay. Thank you. Who would like to go next? Aaron, you want to give that a stab?
Aaron: I’m not sure that I can speak intelligently on that one, so I am going to have to pass on this one.
John: Brian or Greg?
Greg: This is Greg. I'll take a stab at it. I think, at the physical layer, we are a little more obvious than the other folks on the phone here, and we are certainly working to provide the digital, the RF and other media, including optics, so things like isolation and whatnot are very much taken care of. So that is at the physical layer, I would say.
John: Brian, you want to try or are you good?
Brian: Sure. On the RF convergence side, we don't get into a lot of the actual RF convergence itself, but basically we treat anything like that as a heat source, and we can run thermal management on the back side; we have done a plethora of technologies and solutions there. So, again, it is dependent on the boundary conditions and the actual heat loads, but we can advise if more detail is available.
John: Okay. Thank you. Next question for Aaron. You mentioned the AltiVec vector engine and the Intel AVX vector engine. How do they compare?
Aaron: Well, both of those engines are single-instruction, multiple-data processing engines, and they are highly similar in terms of functionality. Probably the biggest difference is that the AltiVec engine is a 128-bit wide engine, which allows you to operate on four single-precision 32-bit words with each cycle, whereas the Intel AVX2 engine is a 256-bit wide engine, which means you can have eight 32-bit single-precision words operating in parallel. That being said, both Freescale and Intel are looking at next generations going up to 256, 512 bits and even beyond for a higher level of vector processing. So, I expect that both AltiVec and AVX will simply continue to grow over time.
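[Editor's note: the width difference Aaron describes reduces to simple arithmetic: the number of single-precision lanes is the register width divided by 32 bits. A minimal sketch, using the published AltiVec and AVX2 register widths:]

```python
# Single-precision (32-bit float) lanes per SIMD register.
def lanes(register_bits, element_bits=32):
    """How many elements one vector instruction operates on."""
    return register_bits // element_bits

altivec = lanes(128)  # AltiVec: 4 floats per operation
avx2 = lanes(256)     # AVX2: 8 floats per operation
avx512 = lanes(512)   # a future 512-bit engine: 16 floats

print(altivec, avx2, avx512)  # 4 8 16
```

All else being equal, peak single-precision throughput scales with the lane count, which is why both vendors keep widening their vector engines.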
John: Thank you. Next question for Brian. How does flattening a heat pipe affect its heat transport capability?
Brian: Yes, thank you. That’s a good question. Flattening a heat pipe will affect the total amount of power a heat pipe can move. Basically, how you account for that is, instead of determining the power capability with the actual physical diameter, you use the hydraulic diameter of the flattened cross section and then you would run a similar power curve, which is basically trying to figure out how much the wick structure can create a pressure drop that overcomes the various other pressure drops within the heat pipe. So, the vapor pressure drops, the liquid pressure drops and the biggest one is the gravitational head pressure drops.
So, as long as you have the wick structure correctly sized to overcome the pressure drops in your system, you can move the full heat load. By flattening the heat pipe, though, you are cutting down the vapor space slightly, so you will have to take that into consideration when running your calculations.
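[Editor's note: the hydraulic-diameter substitution Brian mentions can be sketched as follows. The hydraulic diameter is 4A/P; for a round pipe flattened into a stadium-shaped cross section, A and P follow from the flat height if the wall perimeter is assumed conserved. The geometry below is a hypothetical example, not an ACT design.]

```python
import math

def flattened_hydraulic_diameter(tube_diameter, flat_height):
    """Hydraulic diameter D_h = 4A/P of a round heat pipe flattened
    into a stadium (oblong) cross section, assuming the wall
    perimeter pi*D is conserved during flattening."""
    perimeter = math.pi * tube_diameter
    straight = math.pi * (tube_diameter - flat_height) / 2  # flat-side length
    area = straight * flat_height + math.pi * flat_height**2 / 4
    return 4 * area / perimeter

round_dh = flattened_hydraulic_diameter(6.0, 6.0)  # unflattened: D_h equals D
flat_dh = flattened_hydraulic_diameter(6.0, 3.0)   # flattened to 3 mm height

print(f"round: {round_dh:.2f} mm, flattened: {flat_dh:.2f} mm")  # 6.00, 4.50
```

The reduced hydraulic diameter then feeds the power-curve calculation in place of the physical diameter, capturing the loss of vapor space Brian describes.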
John: Thank you. A question for Michael Stern. What system cooling techniques are typical for deployed HPEC platforms?
Michael: Thanks, John. So, as I alluded to in my presentation, there are some very exotic techniques out there that are required for higher per-slot power levels. In the products we are bringing to market, we try to make sure our products are conduction coolable. Aaron mentioned that 3U is very good because you've got two card edges next to a fairly small form factor and heat source.
In 6U, by staying under about 150 watts per slot, we are able to offer a conduction-coolable solution with meaningful performance at that level. Having said that, we do offer air-flow-through VITA 48.5 form factors for some customers who want to get the most out of their application, but what we are hearing from the market is that a lot of our customers doing deployed systems cannot employ special cooling techniques. There are people who will, because they have extremely high performance requirements, but many ISR applications do not have the thermal capacity for special cooling techniques, at least in my experience. So, conduction cooling is a good way to go.
John: Thanks, Michael. The next question is for Aaron. Aaron, do you have any challenges with meeting security architecture requirements in Intel processors because of the necessary use of the Intel PCH?
Aaron: Not necessarily because of the PCH. Intel does have a security platform, the Intel Trusted Platform technology, for secure computing. It is very similar to the secure boot in Power Architecture and the high-assurance boot mechanism in ARM. In all cases, it is a mechanism where extra circuitry is added on board that can hold encryption keys and validate signed [inaudible 00:57:30], kernels, boot images and running software. So, in a trusted environment, those techniques, including the Intel Trusted Platform, can be used.
There’s also techniques such as multiple root PCI domains that can be used for multiple enclave systems for secure enclave separation and built-in encryption and key management with storage technologies that are pretty common in today’s computing technologies. So, we haven’t hit any security requirements that we can’t meet yet.
John: Thanks, Aaron. The next question is for Greg Powers. You mentioned optimizing functional density as a way of managing SWAP. Why is this necessary?
Greg: Well, it really comes down to the fact that material science has not changed dramatically, and there are basic limits like [inaudible 00:58:15] withstanding voltage that prevent densities from getting much higher in terms of connecting to the board. The board layout itself is also a challenge. So, in reality, the density of the connectors themselves is plateauing. To offset that, we have focused on signal integrity and impedance matching, so the functionality is increased while the density plateaus. Functional density is therefore truly increasing, but this really shines a light on the need for superior signal integrity, and that is one of the areas we are focusing on to continue to drive these systems.
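[Editor's note: Greg's point, that pin counts plateau while per-pair signaling rates climb, comes down to simple arithmetic. The pair counts and line rates below are illustrative, not specifications of any TE product.]

```python
# Functional density grows by raising the per-pair data rate,
# not by squeezing more pins into the same connector footprint.
def aggregate_gbps(differential_pairs, gbps_per_pair):
    """Total connector throughput for a fixed pin field."""
    return differential_pairs * gbps_per_pair

legacy = aggregate_gbps(16, 1.25)  # 20 Gb/s total
modern = aggregate_gbps(16, 10.0)  # 160 Gb/s total

# Same connector size and pin count, 8x the functional density,
# paid for entirely in signal-integrity engineering.
print(modern / legacy)  # 8.0
```

This is why signal integrity, rather than raw contact density, has become the lever for managing SWAP at the connector level.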
John: Okay. We have time for a couple more questions here. Michael, what data and expansion plane fabrics are used for deployed HPEC-class platforms?
Michael: Thanks, John. I hope I made clear in the presentation that what we are trying to do is map high performance computing architectures straight across to embedded 3U and 6U [inaudible 00:59:25] platforms. In the 3U world, we are typically using Ethernet for the control plane and PCIe for the data plane on a peer-to-peer basis. The advantage of moving to 6U is that you get much more scalability, and the software and architecture are made for that.
So, if you think about an HPC system in scientific computing, for example at some of the DOD labs or in high energy physics, you are talking about thousands of Xeon multicore processors and thousands of many-core GPUs, all linked together, typically by InfiniBand or Ethernet for the data plane and Ethernet for the control plane. Those very same architectures are what we are talking about deploying here, both on 3U with PCIe and on 6U with our InfiniBand and Ethernet data planes and the software that goes with it.
John: Time for two more quick questions. Is the Fortis Zd connector VPX compliant, Greg?
Greg: It’s actually not BPX compliant but we would call it compatible, in that it can actually sit on the same cardage, it can be packaged in very much the same way as the BPX ecosystem and it’s made specifically for Melero applications. So it’s really meant as a non-standard connector to open up a designer’s creativity.
John: Okay. I think Michael had a follow up on secure boot. Do you want to kind of throw that in, Michael?
Michael: Yeah, thanks. So, Aaron mentioned that the PowerPC architecture and the Intel architecture have capability for trusted execution, and that, of course, comes with the chip sets; we support that as well. In addition, on some of our products, including our latest fourth-generation Haswell product, we also have an anti-tamper capability on the board, which is an additional off-the-shelf starting place for customers to implement their own custom power bring-up and anti-tamper detection capabilities. It's an emerging area where, especially for foreign military sales, US DOD integrators are looking to make sure that the platforms they deliver only do what it says on the label and can't be used for other purposes.
John: All right. Thank you, Michael. Thanks, everyone. That brings us right up to our time limit for today. I would like to thank Brian Muzyka, Aaron Frank, Michael Stern and Greg Powers for speaking today, and Advanced Cooling Technologies, Curtiss-Wright Defense Solutions, GE Intelligent Platforms and TE Connectivity for sponsoring the event. This e-cast will be archived online today and will be available for one year. There will also be an MP3 version of the event available. Thank you all for attending, and I look forward to seeing you at future Open Systems Media e-casts. Goodbye.