Minutes of Weekly Meeting, 2008-09-15

Meeting called to order 8:20

1. Roll Call

Eric Cormack
Carl Walker
Peter Horwood
Brad Van Treuren
Ian McIntosh
Tim Pender
Heiko Ehrenberg
Excused
Carl Nielson

2. Review and approve 9/8/2008 minutes

(moved by Eric, Second by Ian) - Approved

3. Review old action items

4. Discussion Topics

    1. Results and Status from SJTAG Survey Activity
      • [Ian] This is actually dead right now and can be taken off the agenda.
      • [Brad] Are we going to put a composition of the graphics of the results on the web site somewhere?
      • [Ian] Yeah, I haven't gotten around to it so far, but it is certainly something that I am going to do. At the moment I have a note up saying the survey is closed. I think I will put a link on that page to show the resulting graphics on some page.

    2. Status and review of white paper sections
      • Overview:
      • [Brad] We did say that the overview section is probably stable right now.
      • [Ian] I think so. I feel that you yourself, Brad, need to have a final look at it as well.
      • [Brad] Once my tools are reinstalled and I can look at your new UML this week, there may be some things from that which we may want to throw in there as well.
      • Use Case: None.
      • Hardware: None.
      • Language: None.

    3. System Data Elements Continued Discussion
    • - Discuss homework of the need for descriptions vs. automatically detecting the information directly from the system data provided from other automation tooling (e.g., CAD)
    • [Brad] What I want us to spend some time on first, before we get into Gunnar's slides, is discussing whether there is a need for descriptions that fall outside of the information that we currently have for the tooling vs. information that can be automatically gleaned or mined from the available information to build up what we traditionally do in something like an HSDL. As we discussed in the past, many of the tools will actually construct the topology of the chain based on the netlist information and some predetermined intelligence in the tool for how to look for gateway devices and things like that.
    • [Ian] One of the things that went through my mind thinking of this is that, from the perspective of perhaps an OEM board vendor, I wonder how comfortable they would be with releasing board-level netlist information when an alternative might be giving a higher-level description. It is just these intellectual property/security type issues. I know I have said in the past that there isn't an awful lot of IP given away in a netlist when you get down to programmed devices. I could see more of an attraction to an HSDL type description rather than working straight off of a raw netlist and bills of materials and so on.
    • [Brad] I think just having the HSDL type information of the topology of the chain in itself is not as useful for many of the things that have to be done as one might think.
    • [Ian] I agree with that. Certainly, HSDL as it stands isn't going to give you enough, but I just wonder if there is a way you can… I am sort of thinking of the way a BSDL file doesn't necessarily tell you everything about how the silicon inside is wired, but it tells you enough about how to use it for a boundary-scan application. If you can get to the same level of abstraction in describing the board without needing to describe in too much detail exactly what's in it, it's kind of like an object-oriented model: you've got a description of something which tells you what properties it has and what methods you can apply to operate on those properties, and then you've got enough to work with it. You don't necessarily know exactly how it behaves internally. I may be becoming a bit too abstract in thinking this way. It may fit better than trying to build up a hierarchical model of what you are trying to deal with.
    • [Brad] This is some of the reason why we at Alcatel-Lucent, really the Lucent days, wanted to develop a concept of boundary-scan plug-n-play so we could just have a repository of pre-generated tests that were available for a board that we could extract in a uniform fashion from a design and then be able to deal with changes of boards and new boards that were put in without having to have the detailed netlist available for the board itself. Everything we needed to be able to use to apply the test was self contained on the UUT in a form that we could extract and apply.
    • [Ian] Yeah. You can extend that a little bit further, as we kind of touched on last week, where you could also include what operations are allowable at the board boundary: what lines you could drive and what lines you could sense, so that you can then start to build it up into higher levels.
    • [Brad] I think that is going to be an interesting description. I don’t think it is going to be that difficult of a description, but I think it is going to be something that is a whole new way of thinking of generating tests because it is going to be more of a real-time test generation.
    • [Ian] But isn’t that really what you are getting at when you start talking about the dynamics of the configurations of systems and partially populated system?
    • [Brad] I think we are going to need to have some more intelligence in the test execution engines, possibly not generate things, but more of compilations or assembly of what can be done at run time.
    • [Ian] An aggregation of things already developed.
    • [Tim] Ian, when you first spoke I got the sense when you wanted to add abstraction to the netlist, I kind of got the idea that you are thinking the netlist would be available on the product itself and that somehow the software would be sucking in that and developing or creating vectors on the fly and that was the need for the abstraction, but I don’t think that’s the case anymore.
    • [Ian] No, I wasn't necessarily thinking that the netlist was immediately available. At some point in the development of the operations that you're doing through the boundary-scan, if you don't have that other type of description, then at some point you're going to have to have the netlist available, and I was just thinking that I could see the concerns raised by an OEM board vendor at handing over what is fundamentally the design of the board to let someone else develop tests for it.
    • [Tim] Yeah, but you have NDAs with your contract vendors.
    • [Ian] If you are getting boards built for you by someone on a specific program, you would probably do that, but I could see a specific situation, if you want to get into more of the consumer type electronics environment, where you could end up with low value boards where you really don't want to start getting into all the contract issues with NDAs and so on. Maybe I'm just thinking about something that wouldn't really happen in practice.
    • [Brad] I know we have had the case with some of the ATCA vendors that they actually committed to giving us netlist information of their boards, because they realize being able to integrate with mezzanine cards and things like that it is necessary to have netlist information to be able to generate interconnection tests. I don’t know if they would do that with every customer that they have.
    • [Ian] I don't know what the answer would be either, Brad. You could understand that OEM vendors would be more comfortable working with an abstract description rather than the detailed design information.
    • [Brad] Yeah. The other alternative would be to have some sort of a format that is similar to the way device designers have now to be able to provide an encoded representation of their circuit that you can bring in for simulation and stuff like that. Thus, you can’t really get access to the details of what’s inside it, but it is useable for simulation and things.
    • [Tim] Similarly, there would also be a need to encrypt the native programming files rather than have all those things sitting out there pretty much for anyone to pirate or corrupt. There might be a need to standardize some kind of an encryption method at the system level.
    • [Brad] Yeah. Does it sound like we have everything we need in the CAD information and the available stuff that comes from the design group, or are there additional things we add all the time as we get to test development?
    • [Heiko] I think it depends on the netlist format because sometimes we get a netlist format that uses stock number or some weird numbers for device types and then, based on netlists, you would not know what type of components those are. If you don’t know what type of components you are dealing with, you can’t really generate a test. At least a BOM might be needed or some mapping between those numbers and device types.
    • [Ian] I think the other thing that really wouldn't necessarily be all that evident in the netlist information is the effect of any analog circuitry related to power supplies and so on, unless you also start introducing SPICE models and things like that for analog parts, and any mixed-signal bits could be open to misinterpretation.
    • [Brad] Well, that brings up a point: what I see a lot with tooling is the definition of what nets are the power and what nets are the grounds based on the naming that is being used. Every one of the tools that I have seen has had options available for the user to specify that naming to the tool. I think this is one thing that we are going to need some sort of description for, as applied to the tooling, to give it more intelligence.
    • [Ian] That may get you the power supply issue, but if you have something like an analog comparator that is supplying a TTL level into some other part of the circuitry, that is something that is not going to be quite as easy. You can’t tidy that away under things like power supplies because it is actually doing something quite different. In my experience you always end up with some nets that need to be constrained in some way and you need to be able to describe this to the tooling. I don’t think you will ever get to the situation where everything is going to come out straight from the CAD files.
    • [Brad] I would agree with that based on what I have seen so far.
    • [Brad] The other piece that I think we have talked about before is the point of the connector-to-connector mapping. The naming from one side to the other side can be quite different and the number of pins used could be quite different – it could be a subset of the other connector. I don't think that type of information will be automatically given unless we can state the connection is based on some sort of a standard that is used in the design.
    • [Brad] Do people feel we need to provide some mechanism that says this is how you control a particular gateway or linkage device to manage the chain or do you think that is something that should be outside of the scope of SJTAG and left as an exercise for the tools to support?
    • [Heiko] Are you talking about the actual scan chain bridging devices?
    • [Brad] Yes.
    • [Heiko] I think if they have a standard protocol or standard behavior, they can probably be handled by the tool since the tool can handle that already.
    • [Brad] The problem is we don’t have any standard interfaces. We have ad hoc standards or defacto standards that people are using right now.
    • [Ian] I think that situation leads to varying support by tools depending on which tools you are actually using. We’ve run into this with some of our work share partners where they’ve used a configuration of devices that are supported by their tooling and not by ours and vice versa.
    • [Heiko] I guess especially if you want to use some programming tools like Xilinx or Altera they wouldn’t be able to handle it.
    • [Ian] No, they fall over with things like that at the moment.
    • [Heiko] Well then they wouldn't be able to handle some sort of SJTAG format either.
    • [Ian] I would be inclined to think that we wouldn't be expecting any kind of device vendor tools to support SJTAG. We would just be looking at third-party, general-purpose tools, if you like. For the device vendors, it is not really their business to support anything other than their types of devices. As soon as you add non-Xilinx parts to your chains, you can't really expect Xilinx to support them, because that is not the type of business they are in. Certainly, people like yourselves, Goepel or JTAG Technologies or ASSET or whoever, those are the people you go to when you want a tool that has to deal with a board with a mix of parts on it, and you would expect that tooling to deal with it.
    • [Brad] I would expect, Ian, that the tools from Xilinx, Altera, Lattice and all of them would be able to deal with standardization in these standards, things like BYPASS of other vendor’s devices in being able to seamlessly interoperate with those devices in the chains based on what they know of the standard.
    • [Ian] Yes, if you’ve got a standard there. Yeah.
    • [Brad] If we actually define some way of ensuring easy access in a well defined manner in SJTAG, I don’t think it would be out of the question to have these tools be able to support those kinds of connections. It is just the ad hoc stuff that is non-standard that is out of the question.
    • [Ian] Yeah. That is probably right.
    • [Brad] And that goes for the emulation tools and stuff like that as well.
    • [Peter] I agree, Brad. If it is written down and is part of the standard, they are going to have to deal with it then.
    • [Brad] Otherwise, you are going to have to jump through hoops like we did in the early days to support access to their devices in providing their own dedicated paths.
    • [Ian] Yep. And that is certainly something that has caused us a bit of pain over the years.
    • [Brad] That begs another question. We have all had to create isolations in our circuits to be able to handle these kinds of cases in the secondary chains. Do you people feel that SJTAG should have some sort of standardized way of dealing with this isolation, so we can be sure we can have some sort of a chain that can operate independently as well as part of an overall topology, or is that something outside of our scope?
    • [Peter] The only area where it might get tricky, Brad, as we spoke about before, is if we specified only an API or something of that nature; you're getting down to the nitty gritty of the gateway device and how it functions, and therefore you are putting the onus on all the manufacturers that we all have to support this particular mechanism that is going to be specified. From my side, I don't see that to be too much of a problem, but it would not happen overnight, I don't believe. Does that make sense?
    • [Brad] I am curious what people think as to whether this is something SJTAG should be worrying about, or is that outside of our scope?
    • [Heiko] Part of SJTAG is to apply the standard board-level type applications, which may include FLASH programming, and it is sometimes worthwhile to have the first CPU device in a chain where the TCK rate can run faster than for the CPLDs or FPGAs. So keeping the CPU in a separate chain from the FPGAs makes sense. You might want to have one or more devices in a separate chain for certain applications.
    • [Brad] I think this may be too early to be asking that question. Obviously, this is something that could be a guideline or a notice in the standard and not necessarily a rule. I think it is something that we need to educate the community on about the need to be able to support these kinds of features and at least consider it when we do our domain analysis.
    • [Brad] Is there anything else? It sounds like everyone is feeling that automation that is 100% from the CAD information and various other sources is going to be insufficient to support SJTAG level testing. I think we are all leaning toward some level of description that is additional, augmenting the provided information, to give some intelligence to the tools to effectively create the tests and access that we need to have at an SJTAG level. So we can't have something that is totally turn-key; we seem to have to have some sort of human intervention.
    • [Eric] Yes.
    • [Tim] Unless there is a way we can actually put hooks into the CAD data that would have some kind of SJTAG attributes that could be applied, so you then know these are some kind of special function that can be used later on; not really used by the CAD tools, but some kind of a placeholder that could be used downstream. For instance, when you create a schematic you put a net property, say this net is a power node or a clock node, so your downstream layout tools use this information to decide the trace thickness or match lengths. Similarly, you could provide some sort of attribute that could be used by our SJTAG tools, like your analog attribute, where you could say that you apply some level and you get some value out.
    • [Brad] It is certainly something that we could entertain. As Ian pointed out last week, we are missing one group of people that would be quite useful for these types of discussions: the EDA vendors. They would be the best ones to answer questions like that.
    • [Ian] The idea of attaching a property does seem like a reasonable one. If your tooling then extracts the netlist, if it knows the properties are formatted in a particular way in there, they can go looking for ones that are SJTAG properties.
    • [Brad] That is also dependent on whether or not the designers take the time to populate that information.
    • [Ian] You have got to make sure the designers take the time to put down the information for boundary-scan to begin with.
    • [Brad] How often do you have to do rework because of missing nomenclature?
    • [Ian] That is it exactly. Most times these have to be peer reviewed anyway. It is almost like part of the design review process is to make sure the SJTAG properties are attached.
    • [Ian] I could see how it could work. I can also see how it could be very difficult.
    • [Peter] It would be automatic, though, if it came with how to describe the attributes on the nets and everything else. Then you could build up your hierarchical system. It would be possible, but then you would need your netlist sent out to many different people, and I think many COTS vendors would push back on that type of situation.
    • [Ian] Maybe there is a half way stage in there where you can extract if you like a portable description format that is derived directly from the CAD data that may then be passed on to the end user.
    • [Brad] Or even an XML format.
    • [Ian] Yeah. Exactly.
    • [Peter] It wouldn’t be hard to have a script that gets passed a netlist that just looks for SJTAG type attributes and you would just build your information up from that.
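A rough sketch of the kind of script Peter describes might look like the following Python. The `SJTAG_*` attribute names and the flattened `NET <name> ATTR=value` export format are invented for illustration; they do not correspond to any existing CAD convention.

```python
# Hypothetical sketch: scan an exported netlist for designer-attached SJTAG_*
# net properties and collect them into a portable description. The attribute
# names and the flattened export format are assumptions for illustration.

def extract_sjtag_attributes(netlist_lines):
    """Collect {net_name: {attr: value}} for nets carrying SJTAG_* properties."""
    nets = {}
    for line in netlist_lines:
        # assumed export format: NET <name> <ATTR>=<value> [<ATTR>=<value> ...]
        parts = line.split()
        if not parts or parts[0] != "NET":
            continue
        name = parts[1]
        for token in parts[2:]:
            if token.startswith("SJTAG_") and "=" in token:
                attr, value = token.split("=", 1)
                nets.setdefault(name, {})[attr] = value
    return nets

sample = [
    "NET vcc_3v3 SJTAG_CLASS=POWER",
    "NET cmp_out SJTAG_CLASS=ANALOG SJTAG_CONSTRAINT=FIXED_HIGH",
    "NET data_bus_0",  # no SJTAG properties: ignored
]
print(extract_sjtag_attributes(sample))
```

The same pass could emit the result as the XML or other portable format mentioned above, so the end user never sees the raw netlist.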
    • [Brad] I think that is some good information and I am just looking at the time and I would like us to get through Gunnar’s presentation. It isn’t that long, but it would address other issues in our language.
    • - Gunnar's STAPL++ (http://files.sjtag.org/Ericsson-Nov2006/STAPL-Ideas.pdf)
      • I will try my best to represent what Gunnar was presenting to the group.
        My feedback to Gunnar was captured in:
        http://files.sjtag.org/Ericsson-Nov2006/STAPLppFeedback.doc
        As you will see, Gunnar presents a strong case and demonstrates clearly why an object oriented perspective is necessary to represent entities at the system level. He also presents a strong case for why current vector languages are inadequate.
    • Slide 1
    • [Brad] I am sure I am not going to do this as well as Gunnar is able to, and I am trying to represent something Ericsson has done with an MSc summer student that was working for Gunnar in providing this capability. I will give it my best shot and I think we can at least glean the important information from it. If there are any questions, I am sure we can forward them to Gunnar and he can respond to us in email, as that is a better forum for him right now.
    • [Brad] On the first slide, this is all about a new player strategy based on STAPL, but it is STAPL with extensions. It looks somewhat like STAPL, but not totally like STAPL. These are some ideas that Gunnar shared earlier, in 2006, with some of the members of the SJTAG team to show some ideas on program design and the different data structures he is proposing to alleviate some of the problems he is having with ASIC level testing.
    • Slide 2
    • [Brad] So I will move to slide 2, which is labeled Why STAPL? His reasoning behind using STAPL is that it is already being used for embedded test and for providing unique test vectors and programs to boards. The real advantage of both STAPL and SVF is that you can have generic software that is able to apply these vectors to the boards themselves, so as you go from board to board you can reuse the same software to apply those tests. STAPL, even though he says it is a de facto and widely used standard, actually has a JEDEC standard for the language, whereas SVF is a true ad hoc/de facto standard that people are using. So Gunnar's feeling is that STAPL is probably better supported from the aspect of standardization than SVF is right now. The third item he wanted to highlight is that it has a lot of software behind it from a couple of different vendors providing freeware players, both at an ASCII level and at a binary player level, with things like byte-code compiled code to apply STAPL based programs. The other key thing is there is a migration path to go from SVF to STAPL. I don't know if anyone has used it, but I know Altera has one that is called svf2jam, where JAM is the name of the language Altera called STAPL prior to it becoming a standard. I think there is another one available as well. Taking an SVF path to STAPL seems to be quite feasible. One of the things Gunnar noticed is that all of the test generation tools are able to produce some form of SVF out of their test generation process, so the interconnect test and traditional testing that we do within a board can be produced as SVF, converted over to STAPL, and applied through a STAPL player instead of an SVF player. STAPL can represent vector information just like SVF can, in a sequential format, but it also provides things like variables and some flow control constructs so you can do things like dynamic vectors. I know I have found that to be quite useful for certain interrogation operations I need to do in the field: I can write a STAPL program that behaves based on the results of some value I have been able to read from the UUT, which is something I can't do with SVF.
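The dynamic-vector point can be illustrated with a small sketch (in Python rather than STAPL, purely for illustration): a sequential format can only replay a fixed list, while a flow-control language can branch on a value read back from the UUT. The register names and readback values here are invented.

```python
# Sketch (Python, not STAPL) of the difference between a purely sequential
# vector format and one with flow control. "uut" stands in for applying a
# pattern and reading back a response; names and values are invented.

def run_static(vectors, uut):
    """SVF-style: apply every vector in order; no run-time decisions."""
    return [uut(v) for v in vectors]

def run_dynamic(uut):
    """STAPL-style: branch on a value read back from the UUT."""
    board_rev = uut("READ_REV")           # interrogate the UUT first
    if board_rev >= 2:
        return uut("TEST_NEW_BLOCK")      # only newer boards have this logic
    return uut("TEST_LEGACY_BLOCK")

# A fake revision-2 UUT for demonstration.
responses = {"READ_REV": 2, "TEST_NEW_BLOCK": "PASS", "TEST_LEGACY_BLOCK": "PASS"}
print(run_dynamic(responses.get))         # chooses the new-block test here
```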
    • Slide 3
    • [Brad] I am moving to slide 3, which is called BScan Vectors and Test Control SW. Basically, what STAPL provides is a vector language capability and the primitives that are necessary to represent and apply vectors, but it also adds in the aspect of flow control, to be able to make decisions based on the results of those particular patterns and change the way that you apply the patterns based on the response you have gotten. Things that Gunnar wants to add are extensions to manage the instantiations of these components and to provide some level of control in the DFT process for IP that people are providing in their tools, which is more of the instrumentation that people are starting to add into their devices. There's this whole concept of parallel procedures that is being talked about, especially in the 1687 organization, where you've got certain logic blocks that are replicated in a design that have some level of testability in them, things like built-in self-test features, that you would like to run in a concurrent fashion to take advantage of the concurrency in the time of operation. So there are things that can be done in parallel that the language needs to be able to address, and at least identify and specify to the programming tools that this can take place. Right now the current languages don't have that. The nice thing is that you can reuse procedures in STAPL for other items that have to take place, and that can simplify the test programming. It is very easy to call a procedure, and to write one procedure to do an initialization or an erase sequence that can be reused for various types of operations. Also, there are some interactive operations that are possible, to deal with the integration and verification labs and allow people to do certain things at certain times, so it would be nice for a language to have something like that. STAPL provides the PRINT statement, even though the standard does not recommend the use of PRINT for any general use of the language. There is also an EXPORT statement where you can define keywords with values of bit vectors or integers to export out to the calling software, to be able to deal with communications in the native language of the tooling that is being used as well as to provide synchronization mechanisms. What Gunnar is proposing is being able to put unique STAPL++ programs on each board, similar to what we are doing with STAPL or SVF right now, and that you would use a common STAPL++ player to apply these tests. So it is very similar to the flow that all of us are using right now, but it is just a different language that is expanded in what it is capable of.
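As a hypothetical sketch of the EXPORT idea (in Python, not STAPL): the player hands keyword/value results back to the calling software through a callback rather than printing them. The callback signature, the register value, and the pass/fail convention are all assumptions for illustration only.

```python
# Hypothetical sketch of EXPORT: the player reports keyword/value results back
# to the host software instead of printing them. Callback shape and values are
# invented; this is not the JEDEC-defined interface.

def make_player(export_cb):
    """Build a 'player' that reports results via the host-supplied callback."""
    def run_program():
        idcode = 0x149511C3               # pretend value scanned from the chain
        export_cb("IDCODE", idcode)       # EXPORT a keyword with an integer value
        export_cb("STATUS", 0)            # 0 = pass, in this invented convention
    return run_program

results = {}
player = make_player(lambda key, value: results.__setitem__(key, value))
player()
print(results)                            # host software now owns the results
```

The host side can then react in its own native language, which is the synchronization mechanism Brad mentions.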
    • [Tim] One thing that comes to mind is if you have a bunch of boards in a system that were pretty much all identical boards and there were long erasure cycles going on, one advantage might be if you had some kind of a broadcast feature where you actually program them all concurrently. Maybe the ISC format makes more sense for that. Is there any kind of ISC-to-STAPL conversion that can happen?
    • [Brad] I’m not familiar with any right now. Programming tools provide some level of translation to STAPL to do programming of their devices, so you can do ISC through STAPL. Depending on what tool you use, there are benefits with the Altera player, there are benefits with the Lattice player, and then there are benefits with the Xilinx flow to provide something like 1532 support, but none of these players that I have found today can support the full capability that 1532 supports in its standard. The goal is to be able to provide some level of concurrency available in programming or configuring the FPGAs and CPLDs in a design.
    • Slide 4
    • [Brad] This is just giving an overview of how his MSc experiment was set up. They have a PC based software test manager, in SJTAG terms, that was running some boundary-scan and debug software as well as some of Ericsson's test and control software. There was an Ethernet link between the PC and the unit under test. There is a test controller residing on one of the boards within the system, and that is where the STAPL++ player resides and actually applies the tests. There is a driver that goes to the hardware to do the protocol conversions to 1149.1. You can see there the kinds of commands that Gunnar is using between the test manager and the test controller: stop on fail, run an action, run a particular procedure. The failing vectors and the execution trace can be returned. The software, you can see, he based on some of the Altera STAPL player, but they now have a totally new rewrite of the STAPL++ player that is more efficient and more optimized. So they have their own stuff that is running now. Basically, what I wanted to show you on this slide is that there is a setup that is controlled from an external source as a test manager, and the STAPL++ player is just receiving instructions about what STAPL++ program should be applied in the system based on what it is told by the test manager. What you can see, which will be on a lot of other slides, is that there is an ASIC B and an ASIC C installed on the board he is testing. He is going to be focusing primarily on ASIC C in the slides coming up.
    • Slide 5
    • [Brad] This is just a bigger picture of what the hardware topology looks like. We have this applications board, which is the board that is going to be the UUT, and the test controller platform that is connected to it to be able to apply those tests at a system level.
    • Slide 6
    • [Brad] The take away from this slide is that there is some sort of a mapping to the serial chain of where each of these devices reside and that there is a TDO segment, a TDI segment, and a MASK segment in the STAPL++ language for these particular devices. So the point that all of us know is that you have a concatenation of these data registers to give you the full register data that is going to be applied in the full operation of the language. Right now the way we deal with things is that we translate with tooling the device specific information and glue it all together in a board level description or even a system level description of a single chain that is all wired together and then apply that all at once in a batch operation as one vector and read back the response. We are losing the information as to where each device resides within that segment. That is what he is trying to highlight here.
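The concatenation described on slide 6 can be sketched as follows; the device names and register lengths are invented. The point is that once the per-device segments are flattened into one chain vector, the device boundaries survive only in a separate offset map that the tooling has to keep.

```python
# Sketch of the slide 6 concatenation: each device contributes a data-register
# segment, and the flattened chain vector is their concatenation from TDI to
# TDO. Device names and register lengths are invented for illustration.

def build_chain(segments):
    """segments: (device, bits) pairs in chain order -> (vector, offset map)."""
    vector, offsets, pos = "", {}, 0
    for device, bits in segments:
        offsets[device] = (pos, pos + len(bits))  # where this device's bits sit
        vector += bits
        pos += len(bits)
    return vector, offsets

chain = [("U1_bypass", "1"), ("U2_idcode", "0" * 32), ("U3_bypass", "1")]
vector, offsets = build_chain(chain)
print(len(vector), offsets["U2_idcode"])   # 34-bit chain; U2 spans bits 1..32
```

Applying only the flattened `vector`, as today's batch flow does, is exactly where the per-device position information gets lost.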
    • Slide 7
    • [Brad] This slide is just trying to show pictorially what the block diagram was trying to show as the schematic before. Now what he is trying to show is that he has devices in the chain that just have a BYPASS in the chain following a reset, and there are other devices that have ID codes so you have the ability to do an IDCODE check on them. His ASIC, on the left hand side, has the bypass, the idcode, and the PLL check. There are also memory BIST controllers and logic BIST controllers. I think the number was 55 MBIST and something like 19 LBIST controllers within this one ASIC. So when you start looking at 55 blocks that are replicated in one ASIC that open themselves up to parallelization, you can see that Gunnar really does show a good case for the need to be able to do parallel operations within our language.
    • Slide 8
    • [Brad] This gives a little different perspective on the whole idea that this scan chain we are applying is really made up of a concatenation of data registers within each of these devices, in an ordered fashion based on what the netlist topology is giving us. So we have one device that is going to be in BYPASS, which is one bit. Another device has its LBIST activated, which is shown in red. We have another MBIST in a device that is activated that we want to be able to control. The whole idea that Gunnar is trying to show here is that it would be nice to express this with some description in the language, which he is showing with a PARALLEL block that ends with an END PARALLEL block. He is introducing a construct similar to one that is in the STAPL language right now, the CALL statement. Instead of calling a global scope procedure, he is now calling procedures that are more like methods of an object. So you create an instance of component 3 and call it component 3, which represents that lower left device that has the BYPASS, and you call its BYPASS procedure to set up and configure the BYPASS information. With that, the player gets the information of where it is located within the chain. Then you have a call to C1's LBIST procedure that is going to configure the LBIST information, and you would be able to identify to the player where it resides in the chain. And likewise with C2, the MBIST. So there is additional information that would be provided to the player at the time when you make the call to these instantiated methods, to begin letting the player assemble the vector and apply it over time.
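The PARALLEL/CALL idea from slide 8 might be sketched like this in Python; the component classes, register contents, and chain positions are all invented. The point is that each instantiated component contributes its own segment plus its chain position, and the player, not the test author, assembles the full chain vector.

```python
# Sketch of the PARALLEL idea: instead of one pre-flattened vector, each
# component instance contributes its own segment via a method-like call, and
# the player assembles the chain vector from the pieces. All names and
# register contents here are invented for illustration.

class Component:
    def __init__(self, name, position):
        self.name, self.position = name, position  # position = order in chain
        self.segment = None

    def bypass(self):
        self.segment = "1"                 # one-bit bypass register

    def lbist(self):
        self.segment = "1010"              # pretend LBIST control register

    def mbist(self):
        self.segment = "0110"              # pretend MBIST control register

def assemble(components):
    """Player-side assembly: order the contributed segments by chain position."""
    return "".join(c.segment for c in sorted(components, key=lambda c: c.position))

# Rough analogue of: PARALLEL / CALL C3.BYPASS / CALL C1.LBIST / CALL C2.MBIST
c1, c2, c3 = Component("C1", 0), Component("C2", 1), Component("C3", 2)
c3.bypass(); c1.lbist(); c2.mbist()        # calls may be issued in any order
print(assemble([c1, c2, c3]))
```

Because each call carries its own position, the calls inside the PARALLEL block can be issued in any order and the player still produces the correct composite vector.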
    • Slide 9
    • [Brad] This slide just shows how the devices are wired together, and this is in a multi-drop environment where he has a SCANBRIDGE device installed. So he is able to communicate with each of these devices from his STAPL++ player through the multi-drop architecture.
    • Slide 10
    • [Brad] This gets into the whole meat of it. Slide 10 describes the typical STAPL structure, for those that don't know what STAPL is right now. There is a NOTE block, a set of NOTE statements, that you can use to record particular notes about the program and the hardware you are trying to test, which can be preserved as an artifact of the test. Then there are ACTION statements which define things that you would be able to do: things like run program, or just PROGRAM for programming a device, ERASE, VERIFY. An ACTION such as PROGRAM can have sub-actions available to it. So the entrance point to a STAPL test is really to define what ACTION you want to run. Having multiple ACTIONs allows you to use one STAPL source file for multiple capabilities, which really leverages the reuse factor: you keep one copy of the source in the embedded environment and apply multiple things with that same source file. The PROCEDURE blocks define a set of statements that have to be executed in order; this is the typical programming procedure that you would call to do certain operations. The interesting thing is how that ties with the next one: you have global DATA blocks and DATA blocks that are associated with specific procedures. These define what variables are going to be used within the scope of the procedure, or at global scope. I talked about the PRINT and EXPORT statements before, so we don't need to go into those. Of the flow control statements there are a lot, but most of the time you will see the FOR and IF statements used. It gives a lot more power than what you can get with a language like SVF. The drawback is that when you do a scan, you have to save the response data in a Boolean array.
You then have to test the response through one of these flow control statements, so it is not easy for the player to determine whether a comparison is testing the result of a scan against a known good response, or is just a general comparison done to decide when there has to be a change in the flow of the program. So it is not real easy to correlate and associate a response being tested with a known good response value versus just looking at the data patterns themselves. This is one drawback with STAPL I have found: with SVF you get a response vector back that is directly associated with the scan vector being applied. The CRC statement is the last line of the file, and it gives a CRC check code so that at runtime you can determine whether there was corruption of the program before you apply that program to the hardware.
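A rough skeleton of the STAPL structure just described might look like this. It is written from memory of JEDEC STAPL (JESD71), so details such as the exact DRSCAN arguments and literal formats are only indicative, and DO_PROGRAM is left undefined for brevity.

```
NOTE "DEVICE" "SomeASIC";            ' notes preserved as test artifacts
ACTION PROGRAM = DO_PROGRAM;         ' entry points: one source file,
ACTION VERIFY  = DO_VERIFY;          ' multiple capabilities (reuse)

PROCEDURE DO_VERIFY USES GLOBAL_DATA;
    BOOLEAN result[8];               ' scan response lands in a Boolean array
    DRSCAN 8, expect[7..0], CAPTURE result[7..0];
    IF result[0] != 1 THEN GOTO fail;   ' flow control tests the response
    EXIT 0;
fail:
    PRINT "verify failed";
    EXIT 1;
ENDPROC;

DATA GLOBAL_DATA;
    BOOLEAN expect[8] = #01010101;   ' data scoped to the procedures using it
ENDDATA;

CRC 0F3A;                            ' last line: integrity check at run time
```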
    • Slide 11
    • [Brad] The key things that Gunnar is adding to the language are shown on slide 11. This is where he adds in new blocks. The first is the structure mapping, where he has the class and instantiation maps which map these components and their hierarchical blocks to their procedures and data. It is very similar to languages like C++, Java, and Python, where you define a class for the component it represents, along with the procedures that are available for that particular device. Then there are instantiation statements that create, for a particular class, components with particular names, similar to what VHDL and other languages do. So you can say that this particular component is of this particular component type. Then you can access particular components by using dot hierarchical pathing and access the variables that are defined as part of the class; that is where it is very similar to Python and languages like that. To call a particular instance's procedure, you use a dot separated path in the hierarchy to specify that the call invokes that particular procedure of that particular class for that instance. Parallel execution we already talked about: you use the existing CALL statement to call each particular instance. The important thing here is that the IR scan and DR scan operations are really concatenated together and synchronous. So even though you have multiple calls taking place, these are done in parallel, and when you get to the point where you are going to do a scan operation, the synchronization mechanism is the scan operation itself. As each CALL procedure executes, it sets up the Boolean array information as necessary, and when it gets to the scan operation it does its scan at the point the other loops are going to do their scans, providing the synchronization.
As far as backward compatibility goes, he is adding language features and not removing any, so old STAPL programs should work, and do work, with this new STAPL player as is, with no change required. There needs to be some sort of top class instance that is implicit in that case for the legacy applications. For compiled STAPL, any STAPL program can be compiled to this format, so things like EVF and BVF vectors can all be translated into a STAPL type of program construct as a translation mechanism, to be applied with this new player. The only problem you run into is that you can't really apply STAPL programs as EVF or BVF, or even SVF, 100% of the time, because if there is any kind of flow control in a STAPL program, you are not going to be able to deal with any change required to the vector sequences. I think what he is saying is that there is a certain subset of programs that could be applied to existing players that are based on SVF, but you have to be very careful about what you are doing.
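The structure-mapping blocks described for slide 11 might be sketched as below. The keyword names here (CLASS, ENDCLASS, INSTANCE) and the member layout are guesses based on Brad's comparison to C++/Python classes and VHDL instantiation, not Gunnar's actual syntax.

```
' Hypothetical STAPL++ structure mapping, per the description above
CLASS SCAN_DEVICE;
    DATA;
        BOOLEAN id_ok;               ' class variable, reachable by dot path
    ENDDATA;
    PROCEDURE BYPASS; ... ENDPROC;   ' per-device procedures ("methods")
    PROCEDURE IDCODE; ... ENDPROC;
ENDCLASS;

INSTANCE SCAN_DEVICE C1, C2, C3;     ' like VHDL component instantiation

CALL C3.BYPASS;                      ' dot separated path selects the
PRINT C3.id_ok;                      ' instance's procedure or variable
```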
    • Slide 12
    • [Brad] This gives an overview of the building blocks that can be applied. There is one thing he shows here that ties in with the last item we talked about: there may be a possibility of having an SVF output from this player, but typically you are going to access the hardware directly. If you do implement the SVF writer, then you only get the subset of commands, not the flow control aspects that change the way the vectors are applied. There may be some way to get the concurrency applied through SVF with certain applications. Basically, there is this command interface; you have a parser that reads in the STAPL++ code and builds the internal STAPL++ structure. You then have the STAPL++ player that executes it and applies it to the hardware in the order the statements are observed. Nothing really new from what we know with the existing players.
    • Slide 13
    • [Brad] The key is slide 13, which I wanted to get to. What Gunnar is talking about is that the player itself is really an event queue, and as it executes the statements it does the calls that you see as the green circle with a C in it; that is just a straight call. If you have a CP, you have a call within a parallel block. So if you do a straight call, it will be just like what STAPL is now: you get particular statements that are called to set up Boolean arrays, followed by shift operations that are represented with the I/O circle. That is followed by some type of flow control statement to determine whether there was an error or not. If there was an error, you would then use the EXPORT or EXIT statements to terminate the program or send an event back to the calling application to say, "Hey, there was a problem." Otherwise, you continue the execution flow by coming out of the flow control bubble into the normal flow chain. When you get to the CP, you begin to execute in separate threads. So for the first call in the parallel block you get similar things to what was taking place in the first column, but in this case it is not going to apply the I/O until a later point, and that point is based on when the I/O is going to be addressed by the second CP. So there will be flows taking place that queue up the first I/O in the first thread, and then the second I/O in the second thread. Then all these I/Os come along in the event queue: one CP and a second CP, followed by a list of I/Os. If the player finds a list of I/Os, it concatenates all the I/O information into a single vector and applies them together to the driver. The flow control checks are then handled later in the event queue, after the point where those two I/O scans take place.
So that is how he uses the event queue mechanism to deal with the synchronization of the calls and the I/O operations. Really, that is the end of the language description, but it gives you an overview of why there is a need for this parallelism that is lacking in the current languages.
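The queue behavior Brad describes might be pictured as the trace below. This is purely an illustrative pseudocode trace of the slide-13 mechanism, not taken from the slides themselves.

```
' Illustrative event-queue contents for a two-call parallel block
CP1  CP2  IO(thread1)  IO(thread2)  F(thread1)  F(thread2)
'
' CP1/CP2: the two parallel calls start their threads
' IO(...): adjacent I/O entries are concatenated into a single scan
'          vector and applied to the driver together
' F(...):  the flow-control checks run only after that merged scan
'          has completed
```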
      Are there any questions?
      Overwhelmed?
      [Chuckles from the group]
    • [Brad] As a follow up to this, I did respond to Gunnar when he first presented this information; my response can be found in the Word document I reference on our sjtag.org site. There are issues, especially synchronization issues dealing with in-order and out-of-order problems, that are not handled in his language. I am not convinced yet that there is a turnkey solution available with this language right now, and I don't think Gunnar would disagree with that either. But it does address the issues at hand and shows the need for these types of features.
    • [Brad] With that, is there any further discussion people want to have with this or do you want to just digest it this week and talk about it more next week?
    • [Ian] I think it takes a bit of absorbing.
    • [Peter] I’d say next week to give everyone a chance to absorb and digest things.
    • [Tim] Does he have some example code that we can look at that is not proprietary?
    • [Ian] I think there was some, but not on our SJTAG web site.
    • [Brad] I think there was some shared at the Nordic Test Conference. I can ask Gunnar if he has some he can share.
    • [Ian] I think that’s where I saw some.
    • [Brad] I will put that as an action for me to contact Gunnar for some reference source code.

5. Schedule next meeting

Wednesday, September 24th, 2008, 8:15am EDT
Monday, October 6th, 2008, 8:15am EDT
Monday, October 13th, 2008, 8:15am EDT
Wednesday, October 22nd, 2008, 8:15am EDT
Fringe Meeting at ITC Thursday, October 30th, 10:30-12:30 PDT.
Monday, November 10th, 2008, 8:15am EST
Monday, November 17th, 2008, 8:15am EST
Wednesday, November 26th, 2008, 8:15am EST

6. Any other business

  • Poster session, slides, poster, handout
    • Need to confirm what to have on POSTER slides. Slides for session 19 due Friday. Need to reduce number of slides. Put on agenda to discuss some suggestions for handout.
  • Need to close out scope and purpose soon.

7. Review new action items

  • Brad contact Gunnar for some reference source code
  • Brad contact Rohit to find out what is available for access at Fringe Meeting.
  • Brad to assemble discussion on ITC poster session

8. Adjourn

(Moved by Ian, Second by Carl W.)

Adjourned at 9:50