Nice TWiki > Dev > TestSuite (r1.1 vs. r1.5)
Difference Topic TestSuite (r1.5 - 02 Aug 2003 - DanielBonniot)
Changed:
<
<

%META:TOPICPARENT{name="NiceProjects"}%

>
>

%META:TOPICPARENT{name="ToolsAndLibraries"}%


Difference Topic TestSuite (r1.4 - 17 Mar 2003 - DanielBonniot)
Changed:
<
<

PASS and FAIL GLOBAL COMMENT

>
>

PASS and FAIL

Changed:
<
<

Testcases may contain the TOPLEVEL keyword that has the following meaning: everything in front of the TOPLEVEL keyword will be collected in the main() method. Statements behind the TOPLEVEL keyword are global to the package, like global variables, functions or class definitions.

>
>

Known bugs

When a bug is found, I often write a testcase for it first, before the bug is fixed. In that case, it is useful to flag the testcase as being a known bug. This is done by adding bug at the end of the line. If such a testcase is indeed found to include an error, well, we knew it and nothing is printed. But if it works, a victorious message is printed! :-). In the summary:
  known bugs: 2
  fixed: 1
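
For illustration (the exact syntax is an assumption, since the text above only says that bug is added at the end of the line), a known-bug testcase might look like:

/// PASS bug
  /// COMMENT known compiler bug: this valid code is currently rejected
  int a = 1;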

Testcases may contain the TOPLEVEL keyword that has the following meaning: everything in front of the TOPLEVEL keyword will be collected in the main() method. Statements behind the TOPLEVEL keyword are global to the package, like global variables, functions, methods and class definitions.
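
For example, a testcase using TOPLEVEL could look like the following (this mirrors the example format shown in the documentation further down; the statements themselves are just illustrative):

/// PASS
  /* This statement is collected into the main() method. */
  int local = x;
  /// TOPLEVEL
  /* This definition is global to the package. */
  var int x = 0;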

Changed:
<
<

GLOBAL

>
>

Global

Deleted:
<
<

Known bugs

When a bug is found, I often write a testcase for it first, before the bug is fixed. This means that an error is printed each time the testsuite is run, even if I work on something else. Moreover I don't want to commit the new case, because someone else might think they introduced a bug. So what I propose is to have a flag for test cases: /// PASS (BUG) and /// FAIL (BUG), to tell that the testcase is not working yet. This means that if an error is found, well, we knew it and nothing is printed. But if it works, a victorious message is printed! :-) along with the source of the case. In the summary:
  known bugs: 2
  corrected: 1

Difference Topic TestSuite (r1.3 - 25 Feb 2003 - TWikiGuest)

Difference Topic TestSuite (r1.2 - 19 Feb 2003 - TWikiGuest)
Changed:
<
<

This is the place where discussions around the Nice testsuite can take place.

Nice TestSuite Discussion

>We could also use other char sequences to signal the expected failure column
>/*#*/
>/@/
>or some other.
>
>I know you want to have something general in order to use a bunch of
>keywords in the middle of the code.
>My suggestion would be to take
>/*/// FAIL HERE*/
>then I could scan /*...*/ and filter the keywords out like the others
>beginning with /// that are at the end of the line
>
>Then we would use the same format as in the past, and we would make it
>possible to use the same format for keywords ///, in the well known java
>comments /* and */.
>
>What do you think about it?
>
That sounds very reasonable.

>Why should we make a difference between the two types of markers?
>In my opinion both types are keywords used in the context we find them.
>I would appreciate /*/// FAIL HERE*/
>
Good.

>we could write it more simply:
>  /// TYPE Type1 SUBTYPE OF Type2
>"TYPE" and "SUBTYPE OF" are keywords, so we can leave the second ///
>away. 
>
OK too.

Feature requests

Implementation of /* /// FAIL HERE */ as described above. FAIL HERE has been implemented, so the discussion can be removed, and the doc updated. I saw that a warning is issued when the error is not where expected. Since it is only a warning, it is easy to miss it. I think either it should be made an error, or the number of warnings should be given at the end, together with the number of tests and errors. -- DanielBonniot? - 12 Jul 2002

I think reporting the number of errors at the end is the best solution -- AlexGreif? - 15 Jul 2002

OK. So it should end with something like:

number of testcases: 100
  succeeded: 97
  warnings :  2
  failed   :  1

I think it is better not to count warnings as succeeded, to stress that there is something to look at. When the error location is wrong, it would also be nice to print both the expected and the actual location. This would help in understanding the problem. I never know which one is printed! :-) -- DanielBonniot?

ok, but it gets complicated when the user expected one failure, but two failures occurred in the same line, and neither of the two matches the expected one. Anyway, is it possible that the compiler recognizes more than one failure in the same line? -- AlexGreif? - 15 Jul 2002

In theory it should be possible to have two errors on the same line. In the current implementation I think this can seldom happen (unless you write several methods on the same line! :-) But we should not rely on that to stay true forever. I think you could just list all the expected and all the actual locations. In most cases there will be only one anyway (I try to write small independent testcases, instead of longer ones). -- DanielBonniot? - 16 Jul 2002

Warnings that result from wrong FAIL_HERE positions are treated as a sort of failure. Thus the source code and the compiler results are shown. A new counter has been introduced that counts the warnings independently of failures. -- AlexGreif? - 29 Jul 2002

Saving output for failed tests

When a test fails, it would be useful if the output of the compilation (and the output of the run, with the stack trace of the exception thrown) was saved in a file, in the failure directory.

>
>

This page documents the testsuite engine for Nice. It also contains a list of wanted features that are not implemented yet.

Added:
>
>

TOC: No TOC in "Dev.TestSuite"

Deleted:
<
<

Saving successful tests

Sometimes I want to look at the code generated by successful tests. It would be good to have a -save command line option to say to save all tests, not just the failing ones.

Known bugs

When a bug is found, I often write a testcase for it first, before the bug is fixed. This means that an error is printed each time the testsuite is run, even if I work on something else. Moreover I don't want to commit the new case, because someone else might think they introduced a bug. So what I propose is to have a flag for test cases: /// PASS (BUG) and /// FAIL (BUG), to tell that the testcase is not working yet. This means that if an error is found, well, we knew it and nothing is printed. But if it works, a victorious message is printed! :-) along with the source of the case. In the summary:

  known bugs: 2
  corrected: 1
Well, that's all for now! Quite a lot of new features :-) None is critical, so do what you can when you find the time. But all these features will make testing easier.

BTW, the testsuite is getting more and more powerful. Maybe it is getting time to put a copyright on it. Alex, I think it should be on your name. What about the GPL? :-)) -- DanielBonniot? - 29 Jul 2002

Deleted:
<
<

You mean, changing the headers in the source files? -- AlexGreif?

Changed:
<
<

Yes. The files in n.t.testsuite.* have no copyright header (they have your name as the author). To apply the GPL, see http://www.gnu.org/licenses/gpl.html#SEC4 (it could be similar to the compiler's headers, changing the name :-)

>
>

Nice TestSuite Documentation

Deleted:
<
<

-- DanielBonniot? - 30 Jul 2002

Compiler warnings

We need to track compiler warnings, not just errors. I think by default a warning should be considered an error. We could add a ///WARN tag to say that a test should (or can?) emit a warning.

Nice TestSuite Documentation

Added:
>
>

Changed:
<
<

COMMENT

>
>

Comments

Added:
>
>

Specifying the location of failures

In a /// FAIL test, it is often desirable to specify where (which line, which column) the error should be reported. This allows checking that the compiler gives the correct location to the user, and also that the test does not fail "by accident", for another, unrelated reason (for instance a syntax error when writing the test).

This can be done by embedding a comment of the form /* /// FAIL HERE */ just before the location of the desired error in the test source.
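
For instance, a testcase pinning down the expected error position might look like this (a sketch; the exact spacing around the marker is an assumption):

/// FAIL
  /* The type error should be reported exactly at the string literal. */
  int a = /* /// FAIL HERE */ "not an int";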

Warnings that result from wrong FAIL HERE positions are treated as a sort of failure. Thus the source code and the compiler results are shown.

Running the testsuite

To run the whole testsuite, just call make check in the main directory of Nice.

To run a specific subset, you can use:

java -Dassertions=true -classpath classes nice.tools.testsuite.TestNice testsuite/<directory>

If you use Java 1.4 or later, use java -ea instead of java -Dassertions=true (this should also be done in the Makefile, but how do we make it work for both 1.3 and 1.4?).
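
Putting the two together, the invocation for Java 1.4 or later would presumably be:

java -ea -classpath classes nice.tools.testsuite.TestNice testsuite/<directory>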

Feature requests

Saving output for failed tests

When a test fails, it would be useful if the output of the compilation (and the output of the run, with the stack trace of the exception thrown) was saved in a file, in the failure directory.

Saving successful tests

Sometimes I want to look at the code generated by successful tests. It would be good to have a -save command line option to say to save all tests, not just the failing ones.

Known bugs

When a bug is found, I often write a testcase for it first, before the bug is fixed. This means that an error is printed each time the testsuite is run, even if I work on something else. Moreover I don't want to commit the new case, because someone else might think they introduced a bug. So what I propose is to have a flag for test cases: /// PASS (BUG) and /// FAIL (BUG), to tell that the testcase is not working yet. This means that if an error is found, well, we knew it and nothing is printed. But if it works, a victorious message is printed! :-) along with the source of the case. In the summary:
  known bugs: 2
  corrected: 1
Well, that's all for now! Quite a lot of new features :-) None is critical, so do what you can when you find the time. But all these features will make testing easier.

Compiler warnings

We need to track compiler warnings, not just errors. I think by default a warning should be considered an error. We could add a ///WARN tag to say that a test should (or can?) emit a warning. (Not urgent: I'm not sure if there is any interesting warning issued by the current compiler.)
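
If the proposed tag were adopted, a testcase using it might look something like this (purely hypothetical syntax, since ///WARN is only a suggestion at this point):

/// PASS
  /// WARN
  /* Code that is expected to compile, but to trigger a compiler warning. */
  int unused = 0;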

Failure positions

At the moment, the testsuite engine emits a warning if there was no error where you expected one. It should also warn if there was an error where you did not expect any. Probably, as an exception, a test case that has no explicit FAIL HERE is also OK.

The rationale is to catch cases, like the one I just solved, where one error is expected but the compiler reports two errors.

Macros

It could be useful to define some macros, in the form of /// keywords, for tests that are repeated many times with a similar pattern.

One example is:

/// TYPE Type1 SUBTYPE OF Type2

In the engine, this would be expanded to some Nice source that will compile if and only if Type1 is a subtype of Type2.
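
As a sketch of the idea (not an actual implementation), the macro could expand into a testcase along these lines, where the hypothetical helper only typechecks if the subtyping relation holds:

/// PASS
  /// TOPLEVEL
  /* Compiles if and only if Type1 is a subtype of Type2. */
  Type2 coerce(Type1 x) = x;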

This is not urgent. If we really need something like this, perhaps it would be wiser (and more fun!) to transition the testsuite to a set of Nice functions, so that test cases could be written in Nice itself.


Difference Topic TestSuite (r1.1 - 12 Feb 2003 - TWikiGuest)
Added:
>
>

%META:TOPICINFO{author="guest" date="1045063091" format="1.0" version="1.1"}% %META:TOPICPARENT{name="NiceProjects"}%

This is the place where discussions around the Nice testsuite can take place.

Nice TestSuite Discussion

>We could also use other char sequences to signal the expected failure column
>/*#*/
>/@/
>or some other.
>
>I know you want to have something general in order to use a bunch of
>keywords in the middle of the code.
>My suggestion would be to take
>/*/// FAIL HERE*/
>then I could scan /*...*/ and filter the keywords out like the others
>beginning with /// that are at the end of the line
>
>Then we would use the same format as in the past, and we would make it
>possible to use the same format for keywords ///, in the well known java
>comments /* and */.
>
>What do you think about it?
>
That sounds very reasonable.

>Why should we make a difference between the two types of markers?
>In my opinion both types are keywords used in the context we find them.
>I would appreciate /*/// FAIL HERE*/
>
Good.

>we could write it more simply:
>  /// TYPE Type1 SUBTYPE OF Type2
>"TYPE" and "SUBTYPE OF" are keywords, so we can leave the second ///
>away. 
>
OK too.

Feature requests

Implementation of /* /// FAIL HERE */ as described above. FAIL HERE has been implemented, so the discussion can be removed, and the doc updated. I saw that a warning is issued when the error is not where expected. Since it is only a warning, it is easy to miss it. I think either it should be made an error, or the number of warnings should be given at the end, together with the number of tests and errors. -- DanielBonniot? - 12 Jul 2002

I think reporting the number of errors at the end is the best solution -- AlexGreif? - 15 Jul 2002

OK. So it should end with something like:

number of testcases: 100
  succeeded: 97
  warnings :  2
  failed   :  1

I think it is better not to count warnings as succeeded, to stress that there is something to look at. When the error location is wrong, it would also be nice to print both the expected and the actual location. This would help in understanding the problem. I never know which one is printed! :-) -- DanielBonniot?

ok, but it gets complicated when the user expected one failure, but two failures occurred in the same line, and neither of the two matches the expected one. Anyway, is it possible that the compiler recognizes more than one failure in the same line? -- AlexGreif? - 15 Jul 2002

In theory it should be possible to have two errors on the same line. In the current implementation I think this can seldom happen (unless you write several methods on the same line! :-) But we should not rely on that to stay true forever. I think you could just list all the expected and all the actual locations. In most cases there will be only one anyway (I try to write small independent testcases, instead of longer ones). -- DanielBonniot? - 16 Jul 2002

Warnings that result from wrong FAIL_HERE positions are treated as a sort of failure. Thus the source code and the compiler results are shown. A new counter has been introduced that counts the warnings independently of failures. -- AlexGreif? - 29 Jul 2002

Saving output for failed tests

When a test fails, it would be useful if the output of the compilation (and the output of the run, with the stack trace of the exception thrown) was saved in a file, in the failure directory.

Saving successful tests

Sometimes I want to look at the code generated by successful tests. It would be good to have a -save command line option to say to save all tests, not just the failing ones.

Known bugs

When a bug is found, I often write a testcase for it first, before the bug is fixed. This means that an error is printed each time the testsuite is run, even if I work on something else. Moreover I don't want to commit the new case, because someone else might think they introduced a bug. So what I propose is to have a flag for test cases: /// PASS (BUG) and /// FAIL (BUG), to tell that the testcase is not working yet. This means that if an error is found, well, we knew it and nothing is printed. But if it works, a victorious message is printed! :-) along with the source of the case. In the summary:

  known bugs: 2
  corrected: 1
Well, that's all for now! Quite a lot of new features :-) None is critical, so do what you can when you find the time. But all these features will make testing easier.

BTW, the testsuite is getting more and more powerful. Maybe it is getting time to put a copyright on it. Alex, I think it should be on your name. What about the GPL? :-)) -- DanielBonniot? - 29 Jul 2002

You mean, changing the headers in the source files? -- AlexGreif?

Yes. The files in n.t.testsuite.* have no copyright header (they have your name as the author). To apply the GPL, see http://www.gnu.org/licenses/gpl.html#SEC4 (it could be similar to the compiler's headers, changing the name :-)

-- DanielBonniot? - 30 Jul 2002

Compiler warnings

We need to track compiler warnings, not just errors. I think by default a warning should be considered an error. We could add a ///WARN tag to say that a test should (or can?) emit a warning.

Nice TestSuite Documentation

The Nice TestSuite is a framework to test the Nice compiler. For this purpose, a dedicated file format for the testcases has been created. Here are some testcase examples:

/// COMMENT This testsuite tests the following ....
/// PASS
  int a = 1;

/// PASS
  /// TOPLEVEL
  /* A simple global var. */
  var int x = 0;


/// FAIL
  /// COMMENT this should fail
  int a = "";

/// PASS
  /// package a
  /// TOPLEVEL
  class A
  {
    int x = f();
  }
  int f() = 0;
  /// package b import a
  ;
Testcase keywords are all prefixed with ///. At root level (without any whitespace indentation), currently the following keywords are allowed:

PASS and FAIL GLOBAL COMMENT

The keywords PASS and FAIL indicate a testcase on its own, and whether the expected behaviour during compilation is pass or fail. Testcases cannot reference each other; they are all compiled and tested independently.

Testcases may contain the TOPLEVEL keyword that has the following meaning: everything in front of the TOPLEVEL keyword will be collected in the main() method. Statements behind the TOPLEVEL keyword are global to the package, like global variables, functions or class definitions.

PACKAGE

The PACKAGE keyword indicates which package the sources should belong to. More than one PACKAGE keyword can be declared to build a testcase with a certain package structure. Packages can be imported by the PACKAGE some IMPORT other keywords.

GLOBAL

After the GLOBAL keyword, source code can be written that is accessible by all following testcases. Global sources can be placed anywhere (before, after or between testcases) in the testsuite file. The scope of global sources is restricted to the same testsuite file and to testcases that come after the global source. Globals can appear more than once in the file, as the next example demonstrates:

/// GLOBAL
  int g1 = 1;
   
/// PASS
  /* access to g1 only */
  int a = 1;

/// GLOBAL
  int g2 = 2;

/// PASS
  /* access to g1 and g2 */
  /// TOPLEVEL
  /* A simple global var. */
  var int x = 0;
Because globals are accessible only in the same file, placing global sources at the end of the file makes no sense.

COMMENT

Comments can be included, and if desired output, at any place in a testsuite. Comments have the following syntax:

/// COMMENT place your comments here

Comments are one-liners; if you want to write a long comment, either write the whole comment on one line as described above or break it into several COMMENT keywords, each on a new line. The following example shows possible ways to comment:

/// COMMENT This testsuite describes ....
/// GLOBAL
    ...
    
/// PASS
    /// COMMENT This testcase demonstrates ...
    /// COMMENT ... line two of testcase comment ...
    ...
    
/// COMMENT here we could comment the end of the testsuite :)
Activating comments in the output can be done by specifying the -comment flag on the command line:

java -classpath /usr/share/java/nice.jar nice.tools.testsuite.TestNice ~/testsuite/ -comment

or

java -classpath /usr/share/java/nice.jar nice.tools.testsuite.TestNice -comment ~/testsuite/

While running the testcases, temporary files and folders are created in "testsuite-temp-folder". Testcases that fail, in the sense that they don't behave as expected (pass or fail), are collected in the folder "testsuite-fail-folder" for later investigation. In this folder, each testcase gets its own subfolder, numbered from "1" upwards in ascending order. At the next testsuite run both folders are initially cleared. -- AlexGreif? - 27 Jun 2002

