Minutes of Weekly Meeting, 2012-10-15

1. Roll Call

Brian Erickson
Brad Van Treuren
Carl Walker (left 11:30)
Heiko Ehrenberg
Peter Horwood (left 11:35)
Harrison Miles
Tim (joined at 11:30)

Excused absence:
Ian McIntosh
Patrick Au
Eric Cormack
Adam Ley

2. Review and approve previous minutes:

10/08/2012 minutes (draft circulated 10/09/2012)

  • Heiko: we should probably capitalize “scanbridge” (write “Scanbridge” instead)
  • Brad moves to accept the minutes with the proposed amendment, Carl seconds; no objections;
  • --> approved

3. Review old action items

  • Adam proposed we cover the following at the next meeting:
    • Establish consensus on goals and constraints
    • What are we trying to achieve?
    • What restrictions are we faced with?
  • All: do we feel SJTAG requires a new test language to obtain the information needed for diagnostics, or is STAPL/SVF sufficient? See also Gunnar's presentation, in particular the new information he'd be looking for in a test language
    (http://files.sjtag.org/Ericsson-Nov2006/STAPL-Ideas.pdf)
  • Ian: Contact Bill Eklow regarding use of the ITC mailer to promote an SJTAG Fringe Meeting at ITC. - Ongoing
  • Ian hasn't heard any more from Bill on promoting fringe meetings. Ian said he wasn't too sure a mailshot was going to help; more visibility at the venue, such as a big sign by the registration desk, would probably be a better idea, along with a clear indication of which meetings are "open".
  • All: Consider Adam's three points (from the action from the first weekly meeting) and suggest what is preventing us from answering those questions:
    • Establish consensus on goals and constraints
    • What are we trying to achieve?
    • What restrictions are we faced with?
  • perhaps move these points to a different location in the minutes (as a constant reminder, but outside action items); add a link to a forum where this topic can be discussed further;

4. Discussion Topics

  1. ITC Poster - What do we present this year?
    • Heiko shared a rough draft of the poster and asked for feedback / objections on the concept and proposed contents; Brad noted that this is probably the only suitable content at this time; there were no objections, and Heiko said he'd go ahead and add more content for review by the group later this week; the purpose of the poster would be to educate and to initiate discussions;
  2. Can we fulfill all our Use Cases if we consider a single architecture featuring an intelligent controller?
    • Adam Ley wrote by email: "Speaking for a moment to item b - can we be more explicit as to what is meant by all our Use Cases? Would I be correct in presuming that means all items posted in the white paper Volume 2 (http://wiki.sjtag.org/index.php?title=Volume_2)? Should there perhaps be some prioritization? Or some amendment considering that the last edition of the volume was 3 years ago?"
    • Heiko and others on the call agreed that the use cases described in chapter 2 of the SJTAG wiki should be our focus. Ian responded to Adam's email with "This was meant to be a quick investigation into whether it looked feasible to address all the Volume 2 Use Cases. Yes, there probably needs to be prioritization, especially if we feel some Use Cases are not achievable via the proposed 'standard architecture', but that should 'come out in the wash.'"
    • Heiko shared wiki section 2 (http://wiki.sjtag.org/index.php?title=Volume_2)
    • {Carl left}
    • Harrison stated that support for the use cases depends on the layer they are in; e.g. structural test can't be done while you are in functional mode.
    • Brad suggested that for now we can assume we could be in any layer; we can dive into details later, but for now let's focus on "can these applications be done in an intelligent system controller architecture".
    • Harrison thinks all of the use cases can be done, but was not sure what Environmental Stress Test means in this context.
    • Brad doesn't think Software Debug can be done. Harrison was not sure what specifically is meant by Software Debug, but "software application" debugging seems to be the key.
    • {Peter left}
    • [We need to update the wiki contents to remove the "P" prefix from P1149.7 (the standard has since been published as IEEE 1149.7), change "FLASH" to "Flash", etc.]
    • Brad notes that known-good software is required to be running on the target for software debug to work; there is some possibility to do software debug in this context, but it is limited.
    • Harrison finds that environmental stress test has a lot of dependencies and doesn't seem feasible in the context of a single architecture featuring an intelligent controller. Harrison volunteered to work on a table listing each use case, what can be done with it, and what layer it fits into.
    • Layer can be considered another measure of test coverage: as you move to a higher layer, coverage is lower; as you move to a lower layer, coverage is higher and more specific. The focus differs at each layer: the functional / application side doesn't care about hardware specifics, it is just interested in proper functionality, while test engineering is interested in hardware defects, making sure the hardware is built properly to allow the software / application to do its thing. As we move up through the layers, our focus shifts from where specifically a defect is to whether the application is working or not.
    • Proxy, by Adam Ley:
      Even if test coverage is construed as a means to detect a defined universe of structural defects, I think it is generally acknowledged that high-level testing typically has the capacity for good coverage, but that where it is lacking is in terms of automation (coverage achieved for effort invested) and in terms of diagnosability. Further, while I think we would all acknowledge that measuring the specific structural coverage that results from high-level testing may be difficult, we should not presume that our inability to measure it dictates that there is no such coverage.
    • Proxy, by Brad:
      The point that was noted was that as we move to a higher layer, we move away from trying to test the UUT for its structural integrity in how it is built (specifics of the design) and into more of testing the "Application". The application is acknowledged as providing some level of coverage of the physical implementation, but the intent of the testing is more from a functional perspective, without regard for how that function is specifically implemented in the physical hardware.
      Case in point is the current use of VMs for applications, which really have a virtual operating environment that rides on top of a physical environment of which the application has no real awareness. The application knows about features of the environment that are available, but not the specifics of how they are implemented (e.g., a network port). So testing from an application level, in the highest sense, tests the physical implementation of the hardware as a side effect and not as a planned test operation.
      Boundary-Scan has traditionally been involved with testing the structure of a circuit. With the advanced features people are adding with instrumentation, it is now migrating to also support realms of the functional test domain.
      The point I was making, which started this whole thought process, was that the low levels of test deal with the structure of the UUT and ensure it is sound. As we move up to higher levels, the paradigm shifts from "Is the logic working correctly?" to "Is the function expected by the application working, and at the right time?" That idea gets more abstract the higher you go up in the software layers. Harrison was comparing a system to the ISO OSI layers for networking to illustrate that the upper layers of the hierarchy don't really care about the physical implementation, but rather that certain features are supported by the physical layer. Also keep in mind that the discussion was in regard to a specific architecture - distributed control, where the UUT has the intelligence to test itself and report back to a higher level what its status is.
      [An illustrative code sketch of this distributed-control idea follows at the end of this section.]
    • Brad pointed out an ITC paper written by Gunnar a while back in which he stated that there are use cases where the system’s functionality is used to do structural testing ("Remote boundary scan system test control for the ATCA standard", ITC 2005; paper 32.2).
    • Heiko suggested that we all take on reading this paper as a homework assignment.
    • Harrison notes that even BIST is a "red herring" - there is software self test and there is hardware self test.
    • Time ran out and Heiko suggested that we continue the discussion next week.
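    • [Illustrative aside: a minimal Python sketch of the distributed-control idea from Brad's proxy above, in which a UUT runs its own layered self-tests and reports only a summary upward. All names here (IntelligentUUT, LayerResult, the stand-in checks) are hypothetical and are not drawn from any SJTAG material.]

      # Hypothetical sketch: each layer runs its own checks; lower layers
      # report structural detail, higher layers only report whether a
      # required feature works as expected.
      from dataclasses import dataclass, field
      from typing import Callable

      @dataclass
      class LayerResult:
          layer: str        # e.g. "structural" or "application"
          passed: bool
          detail: str = ""  # diagnostics; richest at the lowest layer

      @dataclass
      class IntelligentUUT:
          # A UUT with enough intelligence to test itself and report upward.
          tests: dict[str, Callable[[], LayerResult]] = field(default_factory=dict)

          def self_test(self) -> list[LayerResult]:
              # Run from the lowest (most specific) layer to the highest
              # (most abstract): moving up, results say less about *where*
              # a defect is and more about *whether* a feature works.
              return [run() for run in self.tests.values()]

          def report_status(self) -> bool:
              # The higher level sees only pass/fail, not structural detail.
              return all(r.passed for r in self.self_test())

      # Stand-in checks; real tests would drive Boundary-Scan, BIST, etc.
      uut = IntelligentUUT(tests={
          "structural": lambda: LayerResult("structural", True, "nets ok"),
          "application": lambda: LayerResult("application", True, "port responds"),
      })
      print(uut.report_status())  # True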

5. Key Takeaway for today's meeting

  • As you move higher up in the layers, the concern shifts away from structural integrity toward functional integrity: whether the feature required by an application is working as expected and at the right time. Thus, application-layer testing is more concerned with testing a functional feature than with testing a physical implementation.

6. Schedule next meeting

October 22

7. Any other business

none

8. Review new action items

  • Heiko will prepare SJTAG poster for ITC and send out a draft for review / comments;
  • Harrison will attempt to come up with a table of use cases, their associated layers, and what can be done at each layer;

9. Adjourn

Brad moved to adjourn, Brian seconded;
meeting adjourned at 11:59 AM EDT.