As written in "Mishra is Peterman", here is the list of tasks that I wanted to complete before the FSS Version_0-007 release:

...
7. Add rows to ADDTestCases table for each new pairID in operands table.
8. Run automated verification on ADDTestCases.
9. Run automated verification on MULXTestCases, adding test cases as necessary until workload is 8 hours.

To complete Step 7, on 2009-04-09 I created a Perl program, populate_with_directed_add_cases, that provided one test case template for each operand pair previously dumped. For each template, populate_with_directed_add_cases called RTPG and told it to generate one test case. Each template gave both numbers explicitly (i.e., it was directed) and did not use "rand".
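The program amounts to a single loop over the dumped operand pairs. Here is a minimal Java sketch of the same idea (the actual program is Perl; the database name, operand column names, and template syntax are my assumptions, not the real ones):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PopulateWithDirectedAddCases {
    public static void main(String[] args) throws SQLException {
        // Read every previously dumped operand pair (schema is assumed).
        try (Connection connection = DriverManager.getConnection("jdbc:derby:operandsDB");
             Statement statement = connection.createStatement();
             ResultSet pairs = statement.executeQuery(
                     "SELECT pairID, operand1, operand2 FROM operands")) {
            while (pairs.next()) {
                long operand1 = pairs.getLong("operand1");
                long operand2 = pairs.getLong("operand2");
                // One template per pair. Both numbers are given explicitly
                // (directed), so RTPG makes no random choices; "rand" never
                // appears in the template. The template syntax is hypothetical.
                System.out.printf("ADD operand1=%d operand2=%d count=1%n",
                        operand1, operand2);
            }
        }
    }
}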

Step 8 may also be thought of as using regression simulation to find where previously asserted verification environment correctness no longer applies. FSS Version_0-006 was capable of completing Step 8, but work was known to remain before the next release. The first problem was:

Exception in thread "AWT-EventQueue-0" java.lang.NoClassDefFoundError:
org/apache/derby/jdbc/EmbeddedDataSource40
at com.alanfeldstein.sparc.fss.controller.Loader.openTestCasesDatabase
...

derby.jar was a symbolic link to
/opt/db-derby-10.2.2.0-bin/lib/derby.jar
which I seemed to recall was a special build that supports only JDBC 3.0; EmbeddedDataSource40 is the JDBC 4.0 variant and therefore requires a build for Java 6. To fix this, I pointed the link at a normal Derby 10.4 derby.jar instead.
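For context, here is a minimal sketch of the kind of call that fails, using the method name from the stack trace (the database name is an assumption):

import java.sql.Connection;
import java.sql.SQLException;
import org.apache.derby.jdbc.EmbeddedDataSource40;

public class Loader {
    Connection openTestCasesDatabase() throws SQLException {
        // EmbeddedDataSource40 exists only in Derby jars built for JDBC 4.0
        // (Java 6); a JDBC-3.0-only derby.jar triggers the
        // NoClassDefFoundError shown above.
        EmbeddedDataSource40 dataSource = new EmbeddedDataSource40();
        dataSource.setDatabaseName("testCasesDB"); // name is an assumption
        return dataSource.getConnection();
    }
}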

I guessed that all test results stored into the results database would be fails, caused by the fact that there is no longer an expectedSum column in the test cases database.

Despite the expected failures, a speed measurement was still possible: 39.471 cycles/second. I would be happy to discuss the reasons for this diminished performance, as well as proposed solutions.

A couple of problems were discovered when I attempted to use the postsimulation application to display the automated verification results. There was a database error, but the details were obscured by a GUI problem. Once I had fixed the GUI problem, the full error message was revealed: "Column 'expectedSum' is either not in any table in the FROM list or appears within a join specification and is outside the scope of the join specification or appears in a HAVING clause and is not in the GROUP BY list. If this is a CREATE or ALTER TABLE statement then 'expectedSum' is not a column in the target table."

By 2009-04-15, I had corrected several issues in the postsimulation application related to the fact that expected results are no longer to be found in the test cases database.

There are three interesting paragraphs from my 2009-04-14 notes:

AutomatedVerificationResultsTableModel.getValueAt needs to be improved. For column 3, expectedResult, the value comes not from any ResultSet, but from the SPARC-V9 Standard Reference Model. (A storage efficiency enhancement would be to only store the expectedResult into the results database for failing test cases. The expectedResult could still be displayed by the postsimulation application even for passing test cases.)

The original source of expectedResult is the SPARC-V9 Standard Reference Model, but that doesn't imply that it can't flow through a ResultSet. It just doesn't come from the test cases database. The postsimulation application has no direct access to the SPARC-V9 Standard Reference Model because FSS has completed. Therefore, expectedResult should be stored into the results database, at least for failing test cases.

At this point, I need to know whether or not expectedResult will be found in the results database in all cases. In other words, I need to explore the feasibility of the storage efficiency enhancement. Start working in dataflow order. That is, start working on the SPARC-V9 Standard Reference Model.
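To make the dataflow these notes describe concrete, here is a minimal sketch of a getValueAt along those lines, assuming a hypothetical row type read from the results database in which expectedResult may be absent for passing test cases (the column layout other than column 3 is also an assumption):

import java.util.List;
import javax.swing.table.AbstractTableModel;

public class AutomatedVerificationResultsTableModel extends AbstractTableModel {

    // Hypothetical row type; in the real application these values arrive
    // through a ResultSet over the results database.
    static class ResultRow {
        long testCaseId;
        long actualResult;
        boolean passed;
        Long expectedResult; // null when not stored (passing test cases)
    }

    private final List<ResultRow> rows;

    public AutomatedVerificationResultsTableModel(List<ResultRow> rows) {
        this.rows = rows;
    }

    @Override public int getRowCount() { return rows.size(); }
    @Override public int getColumnCount() { return 4; }

    @Override
    public Object getValueAt(int rowIndex, int columnIndex) {
        ResultRow row = rows.get(rowIndex);
        switch (columnIndex) {
            case 0: return row.testCaseId;
            case 1: return row.actualResult;
            case 2: return row.passed;
            case 3:
                // expectedResult originates in the SPARC-V9 Standard Reference
                // Model and reaches the postsimulation application only through
                // the results database. With the storage efficiency enhancement
                // it is stored only for failing test cases; for passing ones it
                // must equal actualResult, so that is displayed instead.
                return row.passed ? (Object) row.actualResult : row.expectedResult;
            default:
                throw new IllegalArgumentException("column " + columnIndex);
        }
    }
}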


I added a couple of items to my task list (no commitment on when I will actually work on the tasks):

  • Try to obsolete TestBenchView by calling getSystem().getTestBench() from within DUV.

  • Enhance FullChipTestBench so that it only instantiates the SPARC-V9 Standard Reference Model in automated verification mode. It would also release the SPARC-V9 Standard Reference Model to allow garbage collection upon exit from automated verification mode. (A sketch follows this list.)
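A minimal sketch of that second item, with hypothetical names for the mode-transition methods:

// Stub standing in for the real (large) reference model.
class SparcV9StandardReferenceModel {
}

public class FullChipTestBench {
    // Held only while automated verification mode is active.
    private SparcV9StandardReferenceModel referenceModel;

    public void enterAutomatedVerificationMode() {
        if (referenceModel == null) {
            referenceModel = new SparcV9StandardReferenceModel();
        }
    }

    public void exitAutomatedVerificationMode() {
        // Drop the only strong reference so the model becomes eligible
        // for garbage collection.
        referenceModel = null;
    }
}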



By 2009-05-12, development of the SPARC-V9 Standard Reference Model was well underway. A failure from that day was:

Exception in thread "Thread-2" java.lang.IllegalArgumentException: Design under verification has failed to cause a mem_address_not_aligned exception

That was fixed by realizing that the Sputnik DUV has a unique way of accessing memory; a general "implementation" like the SPARC-V9 Standard Reference Model would not use that scheme. The solution to the alignment issue was straightforward: have the microprocessor (reference model or DUV) specify the width of each memory access to the FSS environment. Because Sputnik is less explicit about width, I created an FSS adapter for it.
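A minimal sketch of that arrangement, with hypothetical interface and adapter names (the idea that Sputnik supplies a byte-enable mask is my assumption about its scheme):

// FSS-side view: every memory access declares its width explicitly, so the
// environment can check alignment (the address must be a multiple of the
// width, per the SPARC-V9 mem_address_not_aligned rules).
interface FssMemory {
    long read(long address, int widthInBytes);
}

public class SputnikFssAdapter {
    private final FssMemory memory;

    public SputnikFssAdapter(FssMemory memory) {
        this.memory = memory;
    }

    // Sputnik does not state the width directly; this adapter recovers it
    // (here, hypothetically, from a byte-enable mask) before forwarding the
    // access through the common interface.
    public long read(long address, int byteEnables) {
        int widthInBytes = Integer.bitCount(byteEnables);
        if (address % widthInBytes != 0) {
            throw new IllegalStateException(
                    "mem_address_not_aligned at 0x" + Long.toHexString(address));
        }
        return memory.read(address, widthInBytes);
    }
}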

By 2009-05-16, the SPARC-V9 Standard Reference Model was displaying instruction fetch addresses and the corresponding instructions.

Then I became sidetracked by a minor issue: Sputnik is apparently reading uninitialized memory addresses. FSS handles this (by returning random data), but I'm still curious about why it's happening. I created a memory initialization monitor to assist with the investigation.
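The monitor itself is simple; here is a minimal sketch, assuming hypothetical read/write hook names:

import java.util.HashSet;
import java.util.Set;

// Tracks which addresses have been written and flags any read that precedes
// a write. FSS already tolerates such reads by returning random data; this
// monitor only reports where they come from.
public class MemoryInitializationMonitor {
    private final Set<Long> initializedAddresses = new HashSet<Long>();

    public void onWrite(long address) {
        initializedAddresses.add(address);
    }

    public void onRead(long address) {
        if (!initializedAddresses.contains(address)) {
            System.err.printf("uninitialized read at 0x%016x%n", address);
        }
    }
}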

Once I fully understand the uninitialized memory accesses, I'll get back to the SPARC-V9 Standard Reference Model, which needs at least to recognize the JMPL instruction so that it can follow the jump rather than simply incrementing the program counter.
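For reference, JMPL is a format 3 instruction (op = 2, op3 = 0x38) whose target is r[rs1] plus either r[rs2] or a sign-extended 13-bit immediate; because SPARC-V9 control transfers are delayed, the target goes into nPC rather than PC. A minimal decode sketch follows (the flat register file is a simplification; the real model must handle register windows):

public class ReferenceModelStep {
    long pc;
    long npc;
    long[] r = new long[32]; // simplified flat register file

    // Executes one instruction, recognizing only JMPL; anything else just
    // falls through to the sequential PC update.
    void step(int instruction) {
        int op = instruction >>> 30;
        int op3 = (instruction >>> 19) & 0x3f;
        long nextNpc = npc + 4;

        if (op == 2 && op3 == 0x38) { // JMPL
            int rd = (instruction >>> 25) & 0x1f;
            int rs1 = (instruction >>> 14) & 0x1f;
            boolean useImmediate = ((instruction >>> 13) & 1) != 0;
            long operand2 = useImmediate
                    ? (instruction << 19) >> 19 // sign-extend simm13
                    : r[instruction & 0x1f];    // r[rs2]
            if (rd != 0) {
                r[rd] = pc; // JMPL saves the address of itself
            }
            nextNpc = r[rs1] + operand2; // delayed transfer: target enters nPC
        }

        pc = npc;       // advance through the delay slot
        npc = nextNpc;
    }
}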

Progress was interrupted on 2009-06-04, when I underwent surgery. After the expected three weeks of recovery, I'm just now getting back into Cosmic Horizon.