Past Events
Panel on System Test at ITC 2002 - Can System Test and IC Test
Learn from Each Other?
- Organizer: Scott Davidson - Sun Microsystems
- Moderator: Ian G. Harris - U. Mass, Amherst
- Panelists:
- Tony Ambler - UT Austin
- C. J. Clark - Intellitech
- Scott Davidson - Sun Microsystems
- Rochit Rajsuman - Advantest America R&D Center
- David Williams - Dell
- Description:
Traditional test approaches focus on the testing of a particular class
of hardware designs. The system test problem is distinguished because
it involves the testing of the composition of several heterogeneous
components. System test has always been important to ensure product
quality, but the system test and IC test communities seem to have grown
further apart. We have been debating the relevance of functional
test, the closest thing in the IC world to system test, for some
years.
This panel is designed to see if there is cause to bring these two
communities together and identify common ground. System test has
never been driven by fault models. Can IC test learn from this, as
real defects diverge even further from stuck-at faults? IC test is
heavily metric driven, relying on fault coverage to improve quality,
while system test metrics are based on a small set of physically or
logically inserted faults. Some system tests leverage IC level DFT, such as
BIST. Can more IC level DFT be reused at system level?
The panel will include system test people who can
describe their experiences, and IC test people who are dealing with
system test issues. We hope that this panel will lead to the
application of techniques from each area to the other.
- Panel Summary:
There was general agreement that the state of system test today is
similar to the state of IC test 20 years ago in terms of automation
and standardization. During the early days of IC test the use of
standardized fault models was not an accepted practice. Instead, ICs
were tested functionally, relying on the skill and intuition of
experienced designers to generate thorough test cases and ensure
design quality. Since IC test was dependent on design functionality,
IC test goals varied from project to project. The test generation and
evaluation process was heavily dependent on designer knowledge and
little automation was possible.
System test today is also performed functionally, without
standardization, and it also depends on manual effort. In many ways,
practice in IC test is seen as being more mature than practice in
system test. Some of the reasons for the difference in maturity
include the sheer size of the system test problem, as well as the
heterogeneous nature of systems. A typical system (e.g., a desktop
computer or an automatic IC tester) contains digital and analog
circuitry, software components, and mechanical components. Testing the
interaction between this wide range of component types is clearly a
complex problem.
There was some discussion on the reasons for the differences between
system test and IC test, and ways to move system test to the level of
maturity found in the IC test community. Several panelists highlighted
the use of accepted fault models (e.g., single stuck-at) as a key
component of the successes of IC test. Fault models enable
standardized evaluation of test patterns via fault simulation,
automatic test pattern generation, and automatic DFT
insertion. Although fault models were seen as being useful, the
panelists were generally pessimistic about the possibility of
developing standardized fault models for system test. Even classifying
all of the potential faults in a system is a monumental task because
system implementations are so heterogeneous. Panelists and audience
members noted that in practice, companies building systems do maintain
a log of the different faults they have observed and develop tests
accordingly, focusing on detection of the most likely and detrimental
faults. However, the fault lists maintained by individual companies
are far from a standardized set of fault models which could be used to
support automation.
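To illustrate the kind of automation the panelists credited to a shared fault model, the sketch below (a hypothetical minimal example in Python, not any panelist's tool) enumerates the single stuck-at faults of a two-gate circuit and scores a test set by fault coverage. The netlist, net names, and test vectors are invented for illustration.

```python
# Minimal sketch of single stuck-at fault simulation: with an agreed
# fault model, any test set can be scored by a standard coverage metric.

# Hypothetical netlist: gate output net -> (gate type, input nets).
# Nets "a", "b", "c" are primary inputs; "out" is the primary output.
NETLIST = {
    "n1":  ("AND", ("a", "b")),
    "out": ("OR",  ("n1", "c")),
}

GATES = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y}

def evaluate(inputs, fault=None):
    """Simulate the circuit; fault is (net, stuck_value) or None."""
    values = dict(inputs)
    if fault and fault[0] in values:        # fault on a primary input
        values[fault[0]] = fault[1]
    for net, (gate, ins) in NETLIST.items():
        values[net] = GATES[gate](*(values[i] for i in ins))
        if fault and net == fault[0]:       # fault on a gate output
            values[net] = fault[1]
    return values["out"]

def fault_coverage(tests):
    """Fraction of single stuck-at faults detected by the test set."""
    nets = ["a", "b", "c", "n1", "out"]
    faults = [(n, v) for n in nets for v in (0, 1)]
    detected = {
        f for f in faults
        if any(evaluate(t) != evaluate(t, f) for t in tests)
    }
    return len(detected) / len(faults)

tests = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 0, "c": 0}]
print(f"coverage = {fault_coverage(tests):.2f}")  # prints coverage = 0.70
```

Because the fault list is mechanically derived from the netlist rather than from designer intuition, the same machinery supports automatic test pattern generation and DFT insertion; the panel's point is that system test has no comparable agreed-upon fault universe to enumerate.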