WO2017130087A1 - Methods and systems for isolating software components - Google Patents

Publication number
WO2017130087A1
WO2017130087A1 (PCT/IB2017/050323)
Authority
WO
WIPO (PCT)
Application number
PCT/IB2017/050323
Other languages
French (fr)
Inventor
Eli Lopian
Original Assignee
Typemock Ltd.
Priority claimed from US15/005,145 external-priority patent/US10078574B2/en
Application filed by Typemock Ltd. filed Critical Typemock Ltd.
Publication of WO2017130087A1 publication Critical patent/WO2017130087A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites

Definitions

  • The present invention relates generally to validating software.

BACKGROUND OF THE INVENTION
  • Validating software is a complex problem that grows exponentially as the complexity of the software grows. Even a small mistake in the software can cause a large financial cost. In order to cut down on these costs, software companies test each software component as it is developed or during interim stages of development.
  • Certain embodiments of the present invention disclose a method that enables isolating software components without changing the production code. Testing isolated software components gives better testing results, as the coverage of the tests is much higher and the complexity does not grow exponentially. This is a basic requirement for validating a software component. In order to isolate the components, there is a need to design the program that utilizes the software components in such a way that the components can be changed. This is part of a pattern called Inversion of Control or Dependency Injection. For example, when validating that software behaves correctly on the 29th of February, there is a need to change the computer system's date before running the test. This is not always possible (due to security measures) or desirable (it may disturb other applications).
  • the method used today to verify this is by wrapping the system call to get the current date with a new class.
  • This class may have the ability to return a fake date when required. This may allow injecting the fake date into the code being tested for, and enable validating the code under the required conditions.
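The wrapper approach just described can be sketched as follows. This is an illustrative Python analogue (the name `Clock` and its methods are assumptions, not from the patent): production code calls the wrapper instead of the system date call, and the test injects a fake date.

```python
from datetime import date

class Clock:
    """Wraps the system call that gets the current date, so a fake
    date can be injected into the code being tested."""
    _fake_today = None  # set only by test code

    @classmethod
    def today(cls):
        # Production code calls Clock.today() instead of date.today().
        return cls._fake_today if cls._fake_today is not None else date.today()

    @classmethod
    def set_fake_today(cls, fake_date):
        cls._fake_today = fake_date

# Test code: validate behavior on the 29th of February without
# changing the computer system's date.
Clock.set_fake_today(date(2016, 2, 29))
assert Clock.today() == date(2016, 2, 29)
```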
  • To validate code under such conditions, isolating the code base and injecting fake data are required.
  • a mock framework 110 may dynamically create a fake object that implements the same interface of the real object (the same interface that is created using the Abstract Factory), and has the ability to define the behavior of the object and to validate the arguments passed to the object.
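A rough sketch of such dynamic fake creation follows, in Python rather than .NET. The interface name and the `make_fake` helper are illustrative assumptions; the point is that the fake implements the same interface, returns defined behavior, and records the arguments passed so they can be validated.

```python
from abc import ABC, abstractmethod

class DogInternet(ABC):
    """Stands in for the real component's interface (illustrative name)."""
    @abstractmethod
    def fetch_dog(self, name): ...

def make_fake(interface, returns):
    """Dynamically create a fake that implements the same interface,
    returns canned values, and records arguments for later validation."""
    recorded = []
    def stub(method_name):
        def method(self, *args):
            recorded.append((method_name, args))  # keep args for validation
            return returns.get(method_name)       # defined fake behavior
        return method
    ns = {m: stub(m) for m in interface.__abstractmethods__}
    ns['recorded'] = recorded
    # Use the interface's own metaclass (ABCMeta) to build the fake class.
    fake_cls = type(interface)('Fake' + interface.__name__, (interface,), ns)
    return fake_cls()

fake = make_fake(DogInternet, {'fetch_dog': 'a fake dog'})
assert fake.fetch_dog('rusty') == 'a fake dog'
assert fake.recorded == [('fetch_dog', ('rusty',))]
```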
  • Any suitable processor, display and input means may be used to process, display, store and accept information, including computer programs, in accordance with some or all of the teachings of the present invention, such as but not limited to a conventional personal computer processor, workstation or other programmable device or computer or electronic computing device, either general- purpose or specifically constructed, for processing; a display screen and/or printer and/or speaker for displaying; machine-readable memory such as optical disks, CDROMs, magnetic-optical discs or other discs; RAMs, ROMs, EPROMs, EEPROMs, magnetic or optical or other cards, for storing, and keyboard or mouse for accepting.
  • the term "process” as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g. electronic, phenomena which may occur or reside e.g. within registers and/or memories of a computer.
  • the above devices may communicate via any conventional wired or wireless digital communication means, e.g. via a wired or cellular telephone network or a computer network such as the Internet.
  • the apparatus of the present invention may include, according to certain embodiments of the invention, machine readable memory containing or otherwise storing a program of instructions which, when executed by the machine, implements some or all of the apparatus, methods, features and functionalities of the invention shown and described herein.
  • the apparatus of the present invention may include, according to certain embodiments of the invention, a program as above which may be written in any conventional programming language, and optionally a machine for executing the program such as but not limited to a general purpose computer which may optionally be configured or activated in accordance with the teachings of the present invention.
  • Fig. 1 is a simplified functional block diagram of a software isolation system constructed and operative in accordance with certain embodiments of the present invention
  • Fig. 2 is an example of a decision table for .NET code, constructed and operative in accordance with certain embodiments of the present invention
  • Fig. 3 is a simplified flowchart illustration for the weaver of Fig. 1, constructed and operative in accordance with certain embodiments of the present invention
  • Fig. 4 is a simplified functional block diagram of a profile linker and associated components, constructed and operative in accordance with certain embodiments of the present invention
  • Fig. 5 is a simplified functional block diagram of the mock framework of Fig. 1 and associated components, all constructed and operative in accordance with certain embodiments of the present invention
  • Fig. 6 is a simplified flow diagram of expectations used by the expectation manager of Fig. 5
  • Fig. 7 is a simplified flow diagram of a natural mock setting embodiment of the present invention
  • Fig. 8 is a simplified flow diagram of a mocked method flow which may be performed by the apparatus of Fig. 1, in accordance with certain embodiments of the present invention.
  • Fig. 9 is a simplified diagram of a method by which the mock framework of Fig. 1 sends messages to the tracer of Fig. 1, all in accordance with certain embodiments of the present invention.
  • Fig. 1 is a simplified functional block diagram of a software isolation system constructed and operative in accordance with certain embodiments of the present invention.
  • the run time system 102 is the system that actually runs the code and the tests; this could be an operating system, a scripting system or a virtual machine (as in Java or .NET).
  • the weaver 104 is responsible for inserting the added hooking code into the production code base 106. In each method of the production code the weaver 104 may insert a small piece of code 107 that calls the Mock framework 110 which then decides whether to call the original code or to fake the call.
  • the inserted code 107 can also modify the arguments passed to the production method if required. This is handy for arguments passed by reference.
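In a dynamic language, the effect of the inserted code 107 can be approximated with a wrapper. The sketch below (Python, with illustrative names; Typemock weaves ByteCode instead) shows the hook asking the framework whether to fake the call before running the original:

```python
# MOCKED stands in for the mock framework's expectation store:
# method name -> fake return value.
MOCKED = {}

def weave(func):
    """Analogue of the small piece of code 107 inserted at the top of
    every production method: ask the framework whether to fake the call."""
    def hooked(*args, **kwargs):
        if func.__name__ in MOCKED:      # framework decides to fake the call
            return MOCKED[func.__name__]
        return func(*args, **kwargs)     # otherwise call the original code
    return hooked

@weave
def get_dog(name):
    raise RuntimeError("talks to the real kennel server")

MOCKED['get_dog'] = 'fake dog'
assert get_dog('rusty') == 'fake dog'    # original body never runs
```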
  • the production code base 106 is the code that is to be isolated. There is no need to change the design of this code just to isolate the code.
  • the test code 108 calls the Mock framework 110 in order to change the behavior of the production code. Here the test can set up what to fake, how to validate the arguments that are passed, what to return instead of the original code and when to fail the test.
  • the mock framework 110 is responsible for creating mock objects dynamically and for managing the expectations and behavior of all fake calls.
  • the tracer 112 is typically used to debug and graphically display the methods that are mocked. It is typically used to analyze the faked and original calls of the production code.
  • the configurator 114 is used to set the options of the tool.
  • the weaver 104 is the heart of the solution. There are several ways in which it is possible to insert code 107 into production code 106, such as but not limited to the following:
  • Each method has its pros and cons.
  • the main decision factors are Ease of implementation and Manual vs Automatic as selected by the user.
  • Fig. 2 is an example of a decision table for .NET code.
  • the method that was chosen was the Profiler API (Fig. 3).
  • In order to solve the issues with the code coverage tool, a Profile Linker was created (Fig. 4).
  • the Weaver 104 registers to the .NET Runtime (CLR) 102 and typically just before the JIT Compiler is run to create machine code 304 from the Byte code 302, instructions pertaining to the added hooking code are inserted as indicated at reference numeral 308.
  • the Weaver 104 typically analyses the signature of the method in order to understand the parameters passed and the return value. This enables writing code that may call the Mock framework 110 to check if the method needs to be faked, and to pass the arguments to the Framework 110 for validating.
  • the code also changes the values of the parameters if required. This is useful for parameters that are passed by reference and for swapping the values for the test (e.g. it is possible to change a filename that is passed as a parameter to point to a dummy file required for the test).
  • the weaver 104 is actually a framework that can be used to insert any new code into a code base.
  • the weaver 104 has to change the metadata and add information that points to the correct Mock framework 110. This is typically done by putting the framework 110 in a known directory (GAC) and by parsing the assembly (dll file) to extract relevant information (version and signing signature). Some information is passed from the Mock framework 110 to the Weaver 104; this is typically done using environment variables, although there are other methods available to do this. According to certain embodiments of the present invention, one, some or all of the following may hold:
  • the weaver 104 must also run well in debug mode, and thus the debug-to-code mapping must be fixed to ignore the code that is added. Try/catch handlers must also be updated to point to the correct positions in the code after the code has been added.
  • the weaver 104 must take into consideration small and large method headers and event handlers.
  • Signed assemblies can only call other signed assemblies so the Mock framework 110 is signed.
  • In order to support multiple .NET versions, the same Weaver 104 is used; it has instructions that enable it to use features of a later version only when that version is available.
  • the Mock framework 110 assembly should not be weaved as this may lead to a recursive infinite loop.
  • Another method to isolate code and to insert fake objects is by changing the metadata tables.
  • Each call to a method is defined as 'call <entry in method table>'.
  • Each entry in the method table has the name of the method, its type (which is actually an <entry in the type table>) and other information.
  • Each entry in the type table has the name of the type and the assembly that it is defined in (which is an <entry in the assembly table>).
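A toy model of these metadata tables, sketched in Python, shows how redirecting a single method-table entry reroutes every call site that references it (the table layout is a simplification of the real CLI metadata):

```python
# Toy model of the metadata tables: a call site stores an index into the
# method table; each method entry points into the type table, and each
# type entry points into the assembly table.
assembly_table = ['ProductionAssembly', 'MockAssembly']
type_table = [('Dogs', 0), ('FakeDogs', 1)]     # (type name, assembly index)
method_table = [('GetDog', 0), ('GetDog', 1)]   # (method name, type index)

def resolve(call_site):
    """Follow the chain: method entry -> type entry -> assembly entry."""
    method_name, type_idx = method_table[call_site]
    type_name, asm_idx = type_table[type_idx]
    return method_name, type_name, assembly_table[asm_idx]

call_site = 0   # 'call <entry 0 in method table>'
assert resolve(call_site) == ('GetDog', 'Dogs', 'ProductionAssembly')
call_site = 1   # rewriting the index the call uses reroutes it to the fake
assert resolve(call_site) == ('GetDog', 'FakeDogs', 'MockAssembly')
```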
  • In order to support profiling and code coverage tools that may be required to run together with the tests, a profile linker may be employed.
  • the profile linker 401 loads one or more profile assemblies (COM objects that are suitable to be a profiler) and then calls each profiler sequentially and weaves code from both the assemblies.
  • the profiler linker 401 takes care of profilers from different versions and makes sure that the profilers work correctly. The linker changes the code of both assemblies.
  • the testing system may detour a CreateProcess system API (or any other similar API). This is the API that tells the operating system to start a new process. The testing application may then check if the new process requires mocking or linking. It can do so by looking at the executable name and at the environment variables passed. The system may change or add to these variables and/or may call the linker profiler and/or signal the two or more profilers to run together.
  • the mock framework 110 is in charge of managing the expectations. This framework is linked by the test code 108, and expectations are recorded using the framework's API.
  • the mock framework 110 typically comprises an Expectation Manager 550, a Natural Mock Recorder 520, a Dynamic Mock Builder 530, an Argument Validation 540, a Run Time Engine 510 and a Verifier 560.
  • the Expectation Manager 550 is a module used to manage the expectations for the fake code.
  • the expectations may be kept in the following way, which is not the only way to do this, but it has its advantages.
  • the Framework 110 holds a map of type expectations 620 that are indexed via the type name. Each Type Expectation is connected to a list of Instance Expectations 630 indexed by the instance and another reference to an Instance Expectation that represents the expectations for all instances.
  • All Instance Expectations of the same type reference an Instance Expectation that manages the expectations for all static methods. This is because static methods have no instance.
  • Each Instance Expectation contains a map of Method Expectations 660 that is indexed via the method name. Each method may have four lists, as shown in Fig. 6.
  • the Method Expectation 660 may first check for a conditional value then a default conditional value, then a regular value and finally the default value.
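That lookup order may be sketched as follows. The dictionary layout is an assumption made for illustration (the patent's Method Expectation 660 is a class), and the "regular" list plays the role of the Return Value Stack of one-shot values:

```python
def resolve_return(method_exp, args):
    """Return-value lookup in the order described above: conditional
    value, then default conditional value, then regular value, then
    the default value."""
    for expected_args, value in method_exp.get('conditional', []):
        if expected_args == args:            # conditional expectation matched
            return value
    if 'default_conditional' in method_exp:  # fallback for any arguments
        return method_exp['default_conditional']
    if method_exp.get('regular'):            # one-shot values (Return Value Stack)
        return method_exp['regular'].pop(0)
    return method_exp.get('default')         # always-fake default return

exp = {'conditional': [((29,), 'leap')],
       'regular': ['next call only'],
       'default': 'fallback'}
assert resolve_return(exp, (29,)) == 'leap'           # conditional wins
assert resolve_return(exp, (1,)) == 'next call only'  # one-shot value
assert resolve_return(exp, (1,)) == 'fallback'        # stack exhausted
```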
  • the Null Return Value 680 and Null Instance Expectation 640 are classes that are part of the Null Object pattern. This leads to faster code while running, as there is no need to check if references to Return Value or Instance Expectation are null.
  • Expectations of Generic types are managed each in its own Type Expectation class with the generic parameters as a key, although the non generic Type Expectation points to the generic one.
  • Expectations of Generic methods are managed each in its own Method Expectation class with the generic parameters as a key, although the non generic Method Expectation points to the generic one.
  • Reflective mocks use the string names of the methods that are to be mocked.
  • the Framework analyzes the tested assembly, searches for the method and checks that it exists and has the correct return value. An expectation is then added for that method.
  • the test code 108 can then change the behavior of the code and registers what that method should do and how many times. The method may be instructed to return a fake result, throw an exception, or call the original code. The framework may also be instructed to always fake a method (this is the default return), or to fake the next call or number of calls (managed by the Return Value Stack).
  • the Framework can be directed to mock all instances, a future instance or to create the mocked instance so that it can be passed to the production code 106 (this may be managed by the Type Expectation).
  • Methods can also have conditional expectations. Conditional expectations may fake calls only if the arguments passed are the same as those expected.
  • the framework allows expectations to be canceled and changed before the actual code is called.
  • Natural Mocks use the actual calls to the methods that are to be mocked.
  • the Framework may be called by these calls (because all the methods are already weaved) and the framework may record that the call is expected, and add it to the list of expectations.
  • the framework allows setting the behavior in the same way as Reflective Mocks. Chained calls are also supported using Natural Mocks. This allows a chain of calls to be mocked in one statement.
  • the Framework may build the return object of one statement in the chain as an input for the next statement in the chain.
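The chained-call recording can be sketched with a recorder object that returns a new recorder for each link in the chain. This is a Python simplification (real Natural Mocks rely on the weaved calls rather than attribute interception), but it shows how a whole chain is captured in one statement:

```python
class Recorder:
    """Records a chain of attribute accesses and calls as expectations."""
    def __init__(self, chain, path):
        self._chain, self._path = chain, path
    def __getattr__(self, name):
        # Each link in the chain returns a new recorder with a longer path.
        return Recorder(self._chain, self._path + '.' + name)
    def __call__(self, *args):
        self._chain.append((self._path, args))    # record the expected call
        return Recorder(self._chain, self._path)  # chain continues from here

expectations = []
dogs = Recorder(expectations, 'Dogs')
dogs.GetDog('rusty').Tail.Wag()      # one statement mocks the whole chain
assert expectations == [('Dogs.GetDog', ('rusty',)),
                        ('Dogs.GetDog.Tail.Wag', ())]
```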
  • the framework has to differentiate between creating Dynamic Mocks for incomplete types and real objects with dummy constructor arguments for complete or static objects.
  • Using Natural Mocks is easier than using Reflective Mocks, and Natural Mocks are supported by IDE editors that allow code completion and automatic re-factoring, but they cannot account for all cases.
  • Re-Factoring is the process of restructuring code without changing its behavior. There are development tools that help to automate this task. When a method cannot be called from the code (for example if its scope is private), Reflective Mocks must be used. Although Reflective Mocks have the advantage of covering all scopes of the methods, they are more prone to mistakes as the methods are passed as a string.
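The string-based nature of Reflective Mocks can be sketched as follows. Note the caveat: Typemock intercepts calls through the weaver, whereas this Python sketch replaces the attribute directly; it only illustrates the lookup-by-name and the existence check that guards against the mistakes mentioned above.

```python
class Dog:
    def wag(self):
        return 'real wag'

def mock_method(cls, method_name, fake_return):
    """Reflective-mock sketch: the method to fake is named by a string,
    so the framework must first verify that it actually exists."""
    if not hasattr(cls, method_name):
        raise AssertionError('no such method: ' + method_name)
    setattr(cls, method_name, lambda self: fake_return)

mock_method(Dog, 'wag', 'fake wag')   # works even for private-style names
assert Dog().wag() == 'fake wag'
```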
  • Fig. 7 is a data flow diagram showing a Natural Mock Setting Expectations Flow according to an embodiment of the present invention.
  • the Dynamic Mock Builder 530 is used to create new objects in a dynamic assembly. This creates real objects out of incomplete classes (with abstract methods or interfaces). These objects can then be used and passed to the production code, so that when methods are called the Run Time Engine 510 may return fake results to the created methods. These objects are built using the standard Reflection library.
  • the Argument Validation 540 is responsible for verifying that the arguments passed are those that were expected. This is done using a hook that actually does the validation.
  • the Arguments passed and those expected are sent to a validation method that checks different attributes of the object.
  • the attributes, which may be of virtually unlimited scope, may, for example, indicate that the objects are the same or that the .Equals() method returns true.
  • the framework 110 has a predefined group of argument validators, including string comparisons, Group and Set comparisons, and a check that verifies that the object is being faked by the framework.
  • the test code 108 can register a customized validator if this is required.
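A sketch of such a validator table, including registration of a customized validator by test code, follows (the validator names are illustrative assumptions):

```python
# Predefined argument validators: each checks one relationship between
# the expected value and the argument actually passed.
validators = {
    'same':   lambda expected, actual: expected is actual,
    'equals': lambda expected, actual: expected == actual,
    'in_set': lambda expected, actual: actual in expected,
}

def register_validator(name, check):
    """Test code can register a customized validator when required."""
    validators[name] = check

def validate(kind, expected, actual):
    if not validators[kind](expected, actual):
        raise AssertionError('argument failed %r validation: %r' % (kind, actual))

validate('equals', 5, 5)
validate('in_set', {'rusty', 'rex'}, 'rusty')
register_validator('prefix', lambda exp, act: act.startswith(exp))
validate('prefix', 'rusty', 'rusty the dog')
```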
  • the run time engine 510 is called from the code weaved into the production code.
  • the Run Time engine 510 checks to see if the specific type, instance and method should be faked. If they are, the code may validate the arguments and return the fake return value.
  • the Run Time Engine 510 checks the arguments to see if a conditional expectation should be used. The engine also calls the argument validation, and when the arguments are not valid the engine may throw an exception. There are cases where throwing the exception is not enough and, when configured correctly, these validation errors may appear at the verifying stage too.
  • Performance is an issue for the Run Time engine 510 as it is run for every method called.
  • One way to solve this is to check if the method is faked; this returns quickly if no mocks have been created or if the type is not mocked. Only after knowing that the method is mocked, are the arguments passed and validated, since passing the argument can take time as they are all required to be encapsulated within an object.
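The fast-path idea can be sketched as follows: a cheap membership test runs first, and the arguments are only encapsulated (modelled here by a lazily invoked callable) once the method is known to be mocked:

```python
mocked_types = set()   # types with at least one expectation set

def on_method_enter(type_name, method_name, lazy_args):
    """Analogue of the check run by the weaved code on every method call.
    The cheap set lookup happens first; the arguments are only packed
    up (the expensive part) when the method is actually mocked."""
    if type_name not in mocked_types:
        return None                     # fast path: run the original code
    args = lazy_args()                  # slow path: now pay for the arguments
    return ('fake-return', args)

# No mocks created yet: the hook returns quickly without touching args.
assert on_method_enter('Dogs', 'GetDog', lambda: ('rusty',)) is None
mocked_types.add('Dogs')
assert on_method_enter('Dogs', 'GetDog', lambda: ('rusty',)) == \
    ('fake-return', ('rusty',))
```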
  • the Run Time Engine 510 passes each call to the Natural Mock Recorder.
  • A flow diagram of the Mocked Method Flow described herein is shown in Fig. 8.
  • the Engine 510 may employ the type, method, instance and type generic and method generic parameters. The last two are for generic specific code only and with them it is possible to map the correct expectations.
  • the engine receives this information from the weaver 104 that analyzed the metadata of the code.
  • the Run Time Engine 510 checks if expectations contain mocks for the new instance. This way the Engine can manage mocking objects that are created after the expectations are set (Future Objects).
  • a static constructor is called once for each type.
  • the Run Time Engine 510 remembers that this was mocked. Then when a method of that type is called and the type is not mocked any more, the static constructor may be called. This ensures that mocking the static constructor in one test will not affect another test.
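This bookkeeping may be sketched like this. It is an illustrative Python model, not the CLR mechanism; the two sets stand in for the engine's memory of which static constructors were faked and which really ran:

```python
ran_cctor = set()     # types whose real static constructor has run
faked_cctor = set()   # types whose static constructor was mocked

def on_static_constructor(type_name, is_mocked, real_cctor):
    if is_mocked:
        faked_cctor.add(type_name)   # remember: the real one was skipped
    else:
        real_cctor()
        ran_cctor.add(type_name)

def on_method_call(type_name, still_mocked, real_cctor):
    # The type was faked in an earlier test but is no longer mocked:
    # invoke the real static constructor now, so mocking it in one
    # test does not affect another test.
    if type_name in faked_cctor and not still_mocked:
        real_cctor()
        faked_cctor.discard(type_name)
        ran_cctor.add(type_name)

inits = []
on_static_constructor('Dogs', True, lambda: inits.append('Dogs'))
assert inits == []                    # mocked: real initializer skipped
on_method_call('Dogs', False, lambda: inits.append('Dogs'))
assert inits == ['Dogs']              # re-invoked once the mock is gone
```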
  • the verifier is called at the end of the test and throws errors when not all the expected calls are made or when an argument validator fails.
  • the verifier can wait until all expected mocks are completed. This is a feature that helps test multithreaded code, where the tested code runs asynchronously in another thread.
  • the framework must run in all .NET versions and uses reflection methods to call the newer version API from the old version.
  • Regarding the production code base 106, nothing has to change here.
  • the test code 108 calls the Mock Framework API in order to change the behavior of the production code.
  • the tracer 112 is used to debug and graphically display the methods that are mocked. It is used to analyze the faked and original calls of the production code. Mocking of future objects can be a bit confusing, and the tracer 112 helps track these issues.
  • Fig. 9 shows the Mock framework 110 sending messages to the tracer 112 process.
  • the configurator 114 is used to configure the behavior of the Framework 110. Using the Configurator 114 it is possible to link a code coverage tool with the mock framework 110. This may be done by changing the registry key of the coverage tool to point to the Profile Linker 401. The Linker 401 then loads both the coverage tool and the mock framework 110.
  • Advantages of certain embodiments of the present invention include that it is much easier to verify the code base of an application. There is no need to perform pre-compile steps, or to create specially designed code, in order for the code to be isolatable and therefore testable. For example, suppose a developer had the following production code: Dogs.GetDog("rusty").Tail.Wag().Speed(5);
  • test code 108 would look like this (the production code changes are not shown):
  • FakeDogInternet fakeInternet = new FakeDogInternet();
  • FakeDog fakeDog = new FakeDog();
  • FakeTail fakeTail = new FakeTail();
  • FakeWagger fakeWagger = new FakeWagger();
  • The following public method may be in the production code: Dogs.SetInternet().
  • the weaver 104 may add code to the ByteCode that mimics the following code, it being emphasized that the weaver 104 adds the code directly to the ByteCode, the original code being unaffected.
  • the equivalent high level language is shown for clarity:
  • the stack may be used to keep the mockReturn object instead of a local variable. This saves the weaver 104 from defining the variable in the metadata. Now that this is in place, it is possible to test that the code that counts the number of days in the current month also works for leap years. Following is an example of one test, showing the code to be tested:
  • Verifying Calls The mechanism can be used to test that a certain call was actually made. In the previous test DateTime.Now might never even be called. As the Mock framework 110 counts the calls made, it can now be verified that the expected calls were actually made.
  • the Weaved code 107 may be:
  • the Original Code may be:
  • the Weaved code 107 may include:
  • Type[] typeGenerics = new Type[] { ClassType };
  • Type[] methodGenerics = new Type[] { MethodType };
  • a new method may be added in the metadata that points to the original method (e.g., for CreateProcess, add a new method »CreateProcess).
  • "»" may be used in the name, as it is legal in IL but not in higher-level languages.
  • the original line may then be changed to point to a newly allocated piece of code that simply calls the new method just defined. In this manner, all calls to the PInvoke method will now be directed to the new method.
  • the new method can now be faked as described herein in relation to normal methods.
  • When a static constructor is called, it may be saved. Subsequently, when a clean-up of the system between tests is requested or desired, all the static constructors may be re-invoked. For loaded types that don't have static constructors, all the static fields may be reset. In order to make sure that all types have been identified, all types loaded in the system may be run through. According to further embodiments, it is possible to tell if a type has been loaded by:
  • After an instance of a given method or function is mocked, it may be left in a bad or improper state and, therefore, should not be used in the system after the test. Such methods or functions may be referred to as "stale mocks". To ensure that stale mocks are not re-used by the system, a list of used mock instances may be stored. Whenever an instance is used it may be tested against that list, and fail if it is a stale mock, i.e. on the list.
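The stale-mock list can be sketched as follows (a Python model with illustrative names; the patent does not specify the storage mechanism):

```python
stale_mocks = set()   # ids of instances mocked in tests that have ended

class StaleMockError(RuntimeError):
    pass

def end_of_test(mocked_instances):
    # Record every instance that was mocked during the test just ended.
    stale_mocks.update(id(m) for m in mocked_instances)

def check_not_stale(instance):
    # Called whenever an instance is used: fail fast if it is stale.
    if id(instance) in stale_mocks:
        raise StaleMockError('instance was mocked in a previous test')

dog = object()
check_not_stale(dog)      # fine while its test is still running
end_of_test([dog])
failed = False
try:
    check_not_stale(dog)  # re-use after the test is detected
except StaleMockError:
    failed = True
assert failed
```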
  • Fake components that are difficult to set up (send e-mail, ftp)
  • Other cases will require a more complex solution.
  • faking a complete set of APIs, for example faking sending an email
  • the code will now have to support creating and calling two different components.
  • One way to do this is to use the Abstract Factory Pattern. Using this pattern, the production code should never create the object (that needs to be faked for tests). Instead of creating the object, the Factory is asked to create the object, and the code calls the methods of the object that the factory created. The factory can then choose what object to create: a real one or a fake one. This requires using an interface that both clients (real and fake) need to implement. It also requires creating a complex mechanism that will allow the factory to choose what object to create and how to do so. This is done mainly through configuration files although it can be done in code too.
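The Abstract Factory approach described above can be sketched as follows (an illustrative Python analogue; the component names are assumptions). Production code never creates the object itself: it asks the factory, and the factory chooses a real or fake implementation, here via a flag that would typically come from a configuration file:

```python
class RealMailer:
    def send(self, msg):
        return 'sent for real'        # would actually talk to an SMTP server

class FakeMailer:
    def send(self, msg):
        return 'pretended to send'    # safe to use in tests

class MailerFactory:
    """Both clients implement the same interface; the factory decides."""
    use_fake = False                  # tests flip this (often via config)

    @classmethod
    def create(cls):
        return FakeMailer() if cls.use_fake else RealMailer()

def notify(msg):
    # Production code only talks to the factory, never to a concrete class.
    return MailerFactory.create().send(msg)

MailerFactory.use_fake = True
assert notify('hello') == 'pretended to send'
```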
  • the invention adds code that is inserted or weaved into the production code base that is being tested.
  • the added code will enable hooking fake or mock objects into our production code by calling the Mock framework.
  • This framework can decide to return a fake object.
  • the framework will also be able to validate and change the arguments passed into the method.
  • the Framework will build the return object of one statement in the chain as an input for the next statement in the chain.
  • the framework has to differentiate between creating Dynamic Mocks for incomplete types and real objects with dummy constructor arguments for complete or static objects
  • Reflective Mocks When a method cannot be called from the code (for example its scope is private), Reflective Mocks must be used. Although Reflective Mocks have the advantage of covering all scopes of the methods, they are more prone to mistakes as the methods are passed as a string.
  • Fig. 7 shows a Natural Mock Setting Expectations Flow Dynamic Mock Builder
  • the Dynamic Mock Builder is used to create new objects in a dynamic assembly. This creates real objects out of incomplete classes (with abstract methods or interfaces). These objects can then be used and passed to the production code, so that when methods are called the Run Time Engine will return fake results to the created methods.
  • the Argument Validation is responsible for verifying that the arguments passed are what we expected. This is done using a hook that actually does the validation.
  • the Arguments passed and those expected are sent to a validation method that checks different attributes of the object.
  • the attributes which may be of virtually unlimited scope, may, for example, indicate that the objects are the same or that the .Equals() method is true.
  • the framework has a predefined group of argument validators including string comparisons, Group and Sets comparisons and even verifying that the object is being faked by the framework.
  • the test code can register a customized validator if this is required.
  • the framework also allows setting arguments of the mocked methods. This actually changes the values of the arguments before the actual code is called. This is useful for arguments that are passed by reference, so we can change their values before they are returned and fake [out] arguments.
  • the run time engine is called from the code weaved into the production code.
  • the Run Time engine checks to see if the specific type, instance and method should be faked. If they are, the code will validate the arguments and return the fake return value.
  • the Run Time Engine checks the arguments to see if a conditional expectation should be used.
  • the engine also calls the argument validation, and when the arguments are not valid the engine will throw an expectation. There are cases where throwing the expectation is not enough and, when configured correctly, these validation errors will appear at the verifying stage too.
  • Performance is an issue for the Run Time engine as it is run for EVERY method called.
  • One way to solve this is to check if the method is faked; this returns quickly if no mocks have been created or if the type is not mocked. Only after knowing that the method is mocked, are the arguments passed and validated, since passing the argument can take time as they are all required to be encapsulated within an object.
  • Fig. 8 shows a Mocked Method Flow
  • the Engine In order for the runtime engine to map the called code to the correct mock expectation the Engine requires the type, method, instance and type generic and method generic parameters. The last two are for generic specific code only and with them it is possible to map the correct expectations.
  • the engine receives this information from the weaver that analyzed the metadata of the code.
  • the Run Time Engine checks if expectations contain mocks for the new instance. This way the Engine can manage mocking objects that are created after the expectations are set (Future Objects).
  • a static constructor is called only once for each type, so when it is mocked the Run Time Engine remembers this. When a method of that type is later called and the type is no longer mocked, the static constructor is called, which makes sure that mocking the static constructor in one test won't affect another test.
  • the verifier is called at the end of the test and throws errors when not all the expected calls were made or when an argument validator failed.
  • the verifier can wait until all expected mocks are completed. This feature helps test multi-threaded code, where the tested code runs asynchronously in another thread.
  • the framework must run in all .NET versions and uses reflection methods to call the newer version's API from the old version.
  • mocking of future objects can be a bit confusing, and the tracer helps track these issues by graphically displaying the faked and original calls of the production code.
  • using the configurator it is possible to link a code coverage tool with the mock framework: the registry key of the coverage tool is changed to point to the Profile Linker, which then loads both the coverage tool and the mock framework.
  • test code will look like this (the production code changes are not shown):
  • FakeDogInternet fakeInternet = new FakeDogInternet();
  • FakeDog fakeDog = new FakeDog();
  • FakeWagger fakeWagger = new FakeWagger();
  • fakeTail.ExpectCall("Wag").Return(fakeWagger);
  • label1: call System::get_DateTicks()
  • days_in_month = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
  • the mechanism can be used to test that a certain call was actually made. In the previous test we might never even call DateTime.Now. As the Mock Framework counts the calls made, we can now verify that the expected calls were actually made.
  • Type[] typeGenerics = new Type[] { ClassType };

Abstract

A software testing system operative to test a software application comprising a plurality of software components, at least some of which are highly coupled hence unable to support a dependency injection, each software component operative to perform a function, the system comprising apparatus for at least partially isolating, from within the software application, at least one highly coupled software component which performs a given function, and apparatus for testing at least the at least partially isolated highly coupled software component.

Description

METHODS AND SYSTEMS FOR ISOLATING SOFTWARE
COMPONENTS
FIELD OF THE INVENTION
[001] The present invention relates generally to validating software.
BACKGROUND OF THE INVENTION
[002] Conventional Internet sources state that "Dependency Injection describes the situation where one object uses a second object to provide a particular capacity. For example, being passed a database connection as an argument to the constructor instead of creating one internally. The term "Dependency injection" is a misnomer, since it is not a dependency that is injected, rather it is a provider of some capability or resource that is injected."
[003] Validating software is a complex problem that grows exponentially as the complexity of the software grows. Even a small mistake in the software can cause a large financial cost. In order to cut down on these costs, software companies test each software component as they are developed or during interim stages of development.
[004] The disclosures of all publications and patent documents mentioned in the specification, and of the publications and patent documents cited therein directly or indirectly, are hereby incorporated by reference.
SUMMARY OF THE INVENTION
[005] Certain embodiments of the present invention disclose a method that enables isolating software components without changing the production code. Testing isolated software components gives better testing results, as the coverage of the tests is much higher and the complexity does not grow exponentially. This is a basic requirement for validating a software component. In order to isolate the components, there is a need to design the program that utilizes the software components in such a way that the components can be changed. This is part of a pattern called Inversion of Control or Dependency Injection. For example, when validating that software behaves correctly on the 29th of February, there is a need to change the computer system's date before running the test. This is not always possible (due to security measures) or desirable (it may disturb other applications). The method used today to verify this is to wrap the system call that gets the current date with a new class. This class may have the ability to return a fake date when required. This may allow injecting the fake date into the code being tested, and enable validating the code under the required conditions. There are many cases where isolating the code base and injecting fake data are required. Here are a few examples:
Fake a behavior that is scarce. (Dates, Out of Memory)
Fake slow running components. (Database, Internet)
Fake components that are difficult to set up (send e-mail, ftp)
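The date example above can be sketched with a hand-written fake wrapper (a minimal Python sketch for illustration; `Clock`, `FakeClock` and `is_leap_day` are hypothetical names, not from the patent):

```python
import datetime

class Clock:
    """Production wrapper around the system clock (illustrative)."""
    def now(self):
        return datetime.datetime.now()

class FakeClock(Clock):
    """Test double that returns a fixed date, e.g. February 29th."""
    def __init__(self, fixed):
        self.fixed = fixed
    def now(self):
        return self.fixed

def is_leap_day(clock):
    # Production code asks the injected clock instead of the system clock.
    today = clock.now()
    return today.month == 2 and today.day == 29

# In a test, the fake date is injected without touching the system date:
leap = is_leap_day(FakeClock(datetime.datetime(2016, 2, 29)))
```

The test never changes the real system date, so it cannot disturb other applications.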
[006] Other cases may require a more complex solution. When faking a complete set of APIs (for example: faking sending an e-mail) there is a need to build a framework that enables isolating the complete API set. This means that the code may now have to support creating and calling two different components. One way to do this is to use the Abstract Factory pattern. Using this pattern, the production code should never create the object (that needs to be faked for tests). Instead of creating the object, the Factory is asked to create the object, and the code calls the methods of the object that the factory created. The factory can then choose which object to create: a real one or a fake one. This requires using an interface that both clients (real and fake) need to implement. It also requires creating a complex mechanism that may allow the factory to choose which object to create and how to do so. This is done mainly through configuration files, although it can be done in code too.
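The Abstract Factory arrangement described above can be sketched as follows (a minimal Python sketch; all class names are illustrative, not from the patent):

```python
class EmailSender:
    """The interface both the real and the fake client implement."""
    def send(self, to, subject):
        raise NotImplementedError

class RealEmailSender(EmailSender):
    def send(self, to, subject):
        ...  # would actually talk to an SMTP server

class FakeEmailSender(EmailSender):
    def __init__(self):
        self.sent = []
    def send(self, to, subject):
        self.sent.append((to, subject))  # record instead of sending

class EmailFactory:
    """The factory decides which object to create; production code
    never constructs the sender itself."""
    use_fake = False  # would normally come from a configuration file
    @classmethod
    def create(cls):
        return FakeEmailSender() if cls.use_fake else RealEmailSender()

def notify(address):
    sender = EmailFactory.create()   # production code asks the factory
    sender.send(address, "Welcome!")
    return sender

EmailFactory.use_fake = True         # the test flips the factory to the fake
sender = notify("user@example.com")
```

Note the cost the patent goes on to describe: the interface, the factory and the configuration mechanism exist purely for testability.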
[007] When testing using fake objects, it is important to validate the arguments passed to the fake object. In this way it is possible to validate that an e-mail that is supposed to be sent has the correct subject and address. The e-mail, of course, is not actually sent. There is no need to validate that component again, as the e-mail tests are done in isolation for the e-mail object.
[008] It is possible to write the fake object and methods by hand or to use a mock framework 110. A mock framework 110 may dynamically create a fake object that implements the same interface of the real object (the same interface that is created using the Abstract Factory), and has the ability to define the behavior of the object and to validate the arguments passed to the object.
[009] Although these methods work and enable testing the code base, they also require that the code be designed to be testable. This cannot always be done, as sometimes the code is legacy code and should remain as such. Legacy code refers to any code that was not designed to allow insertions of fake objects. It would be too costly to rewrite such code, as this may lead to an increase in development time just to make the code testable.
[0010] The more complex the code, the harder it is to maintain. Designing the code to be testable puts constraints on the design that are not always compatible with the production design. For example, the code may be required to implement hooks that enable changing the actual object to a fake one. Such a hook can lead to misuse and hard-to-debug code, as it is intended for testing but it is in the production code.
[0011] It would be easier to test such code if there were no need to change the design for testability, while still being able to isolate and fake the code required to validate it.
[0012] For example, it would be easier if the system could be programmed to fake the real e-mail object. There would then be no need to create an Abstract Factory or interfaces or hooks if the system could be configured not to make the real calls on the e-mail object, but to fake them. In order to solve this problem, certain embodiments of the invention add code that is inserted or weaved 107 into the production code base 106 (Fig. 1) that is being tested. The added code may enable hooking fake or mock objects into the production code by calling the Mock framework 110. This framework can decide to return a fake object. The framework may also be able to validate and change the arguments passed into the method.
[0013] Any suitable processor, display and input means may be used to process, display, store and accept information, including computer programs, in accordance with some or all of the teachings of the present invention, such as but not limited to a conventional personal computer processor, workstation or other programmable device or computer or electronic computing device, either general- purpose or specifically constructed, for processing; a display screen and/or printer and/or speaker for displaying; machine-readable memory such as optical disks, CDROMs, magnetic-optical discs or other discs; RAMs, ROMs, EPROMs, EEPROMs, magnetic or optical or other cards, for storing, and keyboard or mouse for accepting. The term "process" as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g. electronic, phenomena which may occur or reside e.g. within registers and/or memories of a computer.
[0014] The above devices may communicate via any conventional wired or wireless digital communication means, e.g. via a wired or cellular telephone network or a computer network such as the Internet.
[0015] The apparatus of the present invention may include, according to certain embodiments of the invention, machine readable memory containing or otherwise storing a program of instructions which, when executed by the machine, implements some or all of the apparatus, methods, features and functionalities of the invention shown and described herein. Alternatively or in addition, the apparatus of the present invention may include, according to certain embodiments of the invention, a program as above which may be written in any conventional programming language, and optionally a machine for executing the program such as but not limited to a general purpose computer which may optionally be configured or activated in accordance with the teachings of the present invention.
[0016] Any trademark occurring in the text or drawings is the property of its owner and occurs herein merely to explain or illustrate one example of how an embodiment of the invention may be implemented.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] Certain embodiments of the present invention are illustrated in the following drawings:
Fig. 1 is a simplified functional block diagram of a software isolation system constructed and operative in accordance with certain embodiments of the present invention;
Fig. 2 is an example of a decision table for .NET code, constructed and operative in accordance with certain embodiments of the present invention;
Fig. 3 is a simplified flowchart illustration for the weaver of Fig. 1, constructed and operative in accordance with certain embodiments of the present invention;
Fig. 4 is a simplified functional block diagram of a profile linker and associated components, constructed and operative in accordance with certain embodiments of the present invention;
Fig. 5 is a simplified functional block diagram of the mock framework of Fig. 1 and associated components, all constructed and operative in accordance with certain embodiments of the present invention;
Fig. 6 is a simplified flow diagram of expectations used by the expectation manager of Fig. 5, in accordance with certain embodiments of the present invention;
Fig. 7 is a simplified flow diagram of a natural mock setting embodiment of the present invention;
Fig. 8 is a simplified flow diagram of a mocked method flow which may be performed by the apparatus of Fig. 1, in accordance with certain embodiments of the present invention; and
Fig. 9 is a simplified diagram of a method by which the mock framework of Fig. 1 sends messages to the tracer of Fig. 1, all in accordance with certain embodiments of the present invention.
DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
[0018] Reference is now made to Fig. 1 which is a simplified functional block diagram of a software isolation system constructed and operative in accordance with certain embodiments of the present invention. The run time system 102 is the system that actually runs the code and the tests; this could be an operating system, a scripting system or a virtual machine (as in Java or .NET). The weaver 104 is responsible for inserting the added hooking code into the production code base 106. In each method of the production code the weaver 104 may insert a small piece of code 107 that calls the Mock framework 110 which then decides whether to call the original code or to fake the call. The inserted code 107 can also modify the arguments passed to the production method if required. This is handy for arguments passed by reference.
[0019] The production code base 106 is the code that is to be isolated. There is no need to change the design of this code just to isolate the code. The test code 108 calls the Mock framework 110 in order to change the behavior of the production code. Here the test can set up what to fake, how to validate the arguments that are passed, what to return instead of the original code and when to fail the test. The mock framework 110 is responsible for creating mock objects dynamically and for managing the expectations and behavior of all fake calls. The tracer 112 is typically used to debug and graphically display the methods that are mocked. It is typically used to analyze the faked and original calls of the production code. The configurator 114 is used to set the options of the tool.
[0020] There are several ways in which it is possible to insert code 107 into production code 106 such as but not limited to the following:
(a) Change the executable on disk before running the tests,
(b) Use System IO Hooks to change the executable just before reading it from the disk,
(c) Use function hooking techniques,
(d) Use RunTime ClassLoader hooks to change the code before it is run, and
(e) Use Profiler and Debug APIs to change the code 302 before it is loaded, as indicated by arrow 306 in Figs. 3-4.
Each method has its pros and cons. The main decision factors are ease of implementation and manual vs. automatic operation, as selected by the user.
[0021] Fig. 2 is an example of a decision table for .NET code. The method that was chosen was the Profiler API (Fig. 3). In order to solve the issues with the code coverage tool, a Profiler Linker was created (Fig. 4).
[0022] Referring now to Fig. 3, the Weaver 104 registers to the .NET Runtime (CLR) 102 and, typically just before the JIT Compiler is run to create machine code 304 from the Byte code 302, instructions pertaining to the added hooking code are inserted as indicated at reference numeral 308. The Weaver 104 typically analyzes the signature of the method in order to understand the parameters passed and the return value. This enables writing code that may call the Mock framework 110 to check if the method needs to be faked, and to pass the arguments to the Framework 110 for validating. The code also changes the values of the parameters if required. This is useful for parameters that are passed by reference and for swapping the values for the test (e.g. it is possible to change a filename that is passed as a parameter to point to a dummy file required for the test).
[0023] The weaver 104 is actually a framework that can be used to insert any new code into a code base. The weaver 104 has to change the metadata and add information that points to the correct Mock framework 110. This is typically done by putting the framework 110 in a known directory (GAC) and by parsing the assembly (dll file) to extract relevant information (version and signing signature). Some information is passed from the Mock framework 110 to the Weaver 104; this is typically done using environment variables, although there are other methods available to do this. According to certain embodiments of the present invention, one, some or all of the following may hold:
The weaver 104 must run well in debug mode too and thus it is required to fix the debug to code mapping to ignore the code that is added. Try catch handlers must also be updated to point to the correct positions in the code after the code has been added.
The weaver 104 must take into consideration small and large method headers and event handlers.
Creating new code must take place when the assembly is first loaded.
Signed assemblies can only call other signed assemblies so the Mock framework 110 is signed.
In order to support multiple .NET versions the same Weaver 104 is used and has instructions that enable it to use features of the later version only when that version is available.
The Mock framework 110 assembly should not be weaved as this may lead to a recursive infinite loop.
[0024] Weaving via the MetaData is now described with reference to Fig. 3.
[0025] Another method to isolate code and to insert fake objects is by changing the metadata tables. Each call to a method is defined as 'call <entry in method table>'. Each entry in the method table has the name of the method, its type (which is actually an <entry in the type table>) and other information. Each entry in the type table has the name of the type and the assembly that it is defined in (which is an <entry in the assembly table>).
By switching these entries, for example the assembly of the <type> and its <name>, all calls to a method can be redirected to a mocked object. Although this method requires building the mock object and handling delegating calls back to the original object, it has the advantage of being less intrusive, as it does not change the production code, but only the metadata tables. This is useful in cases where the Run time system 102 has restrictions on the code being inserted.
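The table switching can be modeled with plain dictionaries (an illustrative sketch only; the real .NET metadata tables have a different layout):

```python
# Simplified stand-ins for the assembly, type and method tables.
assembly_table = {1: "Production.dll", 2: "Mocks.dll"}
type_table     = {1: {"name": "Dog",     "assembly": 1},
                  2: {"name": "MockDog", "assembly": 2}}
method_table   = {1: {"name": "Bark", "type": 1}}

def resolve(method_id):
    """What a 'call <entry in method table>' instruction resolves to."""
    m = method_table[method_id]
    t = type_table[m["type"]]
    return assembly_table[t["assembly"]], t["name"], m["name"]

# 'call <entry 1>' initially resolves to the real method...
assert resolve(1) == ("Production.dll", "Dog", "Bark")

# ...and switching the type entry redirects every call site at once,
# without touching the call instructions in the production code.
method_table[1]["type"] = 2
assert resolve(1) == ("Mocks.dll", "MockDog", "Bark")
```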
[0026] An embodiment of the Profiler Linker 401 is now described with reference to Fig. 4. In order to support profiling and code coverage tools that may be required to run together with the tests, a profile linker may be employed. The profile linker 401 loads one or more profile assemblies (COM objects that are suitable to be a profiler) and then calls each profiler sequentially and weaves code from both the assemblies. The profiler linker 401 takes care of profilers from different versions and manages to make sure that the profilers work correctly. According to certain embodiments of the present invention, in order to have the ability to debug the code, there is a need to map the actual code with the source file. When code is added, the map needs to be fixed, and/or the linker 401 changes the code of both assemblies.
[0027] According to further embodiments, in order to automatically set correct environment variables for the testing system to work and to link several profilers together, the testing system may detour the CreateProcess system API (or any other similar API). This is the API that tells the operating system to start a new process. The testing application may then check if the new process requires mocking or linking. It can do so by looking at the executable name and at the environment variables passed. The system may change or add to these variables and/or may call the linker profiler and/or signal the two or more profilers to run together.
[0028] An embodiment of the Mock Framework 110 is now described with reference to Figs. 5 and 6. The mock framework 110 is in charge of managing the expectations. This framework is linked by the test code 108, and expectations are recorded using the frameworks API. The mock framework 110, as shown in Fig. 5, typically comprises an Expectation Manager 550, a Natural Mock Recorder 520, a Dynamic Mock Builder 530, an Argument Validation 540, a Run Time Engine 510 and a Verifier 560.
[0029] The Expectation Manager 550 is a module used to manage the expectations for the fake code. The expectations may be kept in the following way, which is not the only way to do this, but it has its advantages. The Framework 110 holds a map of type expectations 620 that are indexed via the type name. Each Type Expectation is connected to a list of Instance Expectations 630 indexed by the instance, and another reference to an Instance Expectation that represents the expectations for all instances.
[0030] All Instance Expectations of the same type reference an Instance Expectation that manages the expectations for all static methods. This is because static methods have no instance. Each Instance Expectation contains a map of Method Expectations 660 that is indexed via the method name. Each method may have the following four lists as shown in Fig. 6:
a default Return Value representing a value to return by default
a queue of return values that should be faked
a queue of conditional values that are used only when the arguments match
a queue of conditional default values that are used only when the arguments match
[0031] The Method Expectation 660 may first check for a conditional value, then a default conditional value, then a regular value and finally the default value. The Null Return Value 680 and Null Instance Expectation 640 are classes that are part of the Null Object pattern. This leads to faster code while running, as there is no need to check if references to Return Value or Instance Expectation are null. Expectations of Generic types are managed each in its own Type Expectation class with the generic parameters as a key, although the non-generic Type Expectation points to the generic one. Expectations of Generic methods are managed each in its own Method Expectation class with the generic parameters as a key, although the non-generic Method Expectation points to the generic one.
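The lookup order of paragraph [0031] can be sketched as follows (an illustrative Python sketch of a hypothetical `MethodExpectation`; the patent's framework is a .NET implementation):

```python
from collections import deque

class MethodExpectation:
    """Holds the four lists of Fig. 6 and applies the lookup order:
    conditional -> default conditional -> regular -> default."""
    def __init__(self, default=None):
        self.conditionals = deque()     # (expected args, value), used once
        self.default_conditionals = []  # (expected args, value), reusable
        self.returns = deque()          # queued one-shot return values
        self.default = default          # fallback (the Null Return Value role)

    def next_value(self, args):
        if self.conditionals and self.conditionals[0][0] == args:
            return self.conditionals.popleft()[1]
        for expected, value in self.default_conditionals:
            if expected == args:
                return value
        if self.returns:
            return self.returns.popleft()
        return self.default

exp = MethodExpectation(default="default")
exp.returns.append("once")
exp.conditionals.append((("x",), "conditional"))
```

A call with matching arguments consumes the conditional value first; later calls fall through the queues and finally land on the default.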
[0032] Two ways to set expectations, namely by the use of Reflective mocks or Natural Mocks, are now described.
a. Reflective mocks use string names of the methods that are to be mocked. The Framework analyzes the tested assembly, searches for the method and checks that it exists and has the correct return value. The method is then added to the expectations of that method. The test code 108 can then change the behavior of the code and register what that method should do and how many times. The method may be instructed to return a fake result, throw an exception, or call the original code. The framework may also be instructed to always fake a method (this is the default return), or to fake the next call or number of calls (managed by the Return Value Stack).
There are also hooks to call user supplied code when the method is called. As some methods are instance methods, there are ways to tell the Framework what instance to mock. For example, the Framework can be directed to mock all instances, a future instance or to create the mocked instance so that it can be passed to the production code 106 (this may be managed by the Type Expectation). Methods can also have conditional expectations. Conditional expectations may fake calls only if the arguments passed are the same as those expected. The framework allows expectations to be canceled and changed before the actual code is called. b. Natural Mocks use the actual calls to the methods that are to be mocked. The Framework may be called by these calls (because all the methods are already weaved) and the framework may record that the call is expected, and add it to the list of expectations. The framework allows setting the behavior in the same way as Reflective Mocks. Chained calls are also supported using Natural Mocks. This allows a chain of calls to be mocked in one statement. The Framework may build the return object of one statement in the chain as an input for the next statement in the chain. Of course the framework has to differentiate between creating Dynamic Mocks for incomplete types and real objects with dummy constructor arguments for complete or static objects.
Using Natural Mocks is easier than Reflective Mocks and they are supported by IDE editors that allow code completion and automatic re-factoring, but these cannot account for all cases. Re-Factoring is the process of restructuring code without changing its behavior. There are development tools that help to automate this task. When a method cannot be called from the code (for example if its scope is private), Reflective Mocks must be used. Although Reflective Mocks have the advantage of covering all scopes of the methods, they are more prone to mistakes as the methods are passed as a string.
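The string-based lookup that makes Reflective Mocks error-prone, and the existence check that mitigates it, can be sketched as follows (illustrative Python; `ReflectiveMock` and its API are hypothetical, not the patent's actual API):

```python
class ReflectiveMock:
    """Records expectations for methods named by string, after first
    reflecting over the tested type to verify the method exists."""
    def __init__(self, cls):
        self.cls = cls
        self.expected = {}

    def expect_call(self, method_name, return_value):
        # Fail fast if the string name is wrong: reflection catches the
        # typo at setup time rather than silently never matching a call.
        if not callable(getattr(self.cls, method_name, None)):
            raise AttributeError(
                f"{self.cls.__name__} has no method {method_name!r}")
        self.expected[method_name] = return_value

class Dog:
    def bark(self):
        return "woof"

mock = ReflectiveMock(Dog)
mock.expect_call("bark", "fake-woof")  # accepted: the method exists
```

A Natural Mock would instead record the expectation from a real call such as `dog.bark()`, which an IDE can complete and refactor, but that style cannot reach private methods.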
[0033] Fig. 7 is a data flow diagram showing a Natural Mock Setting Expectations Flow according to an embodiment of the present invention.
[0034] The Dynamic Mock Builder 530 is used to create new objects in a dynamic assembly. This creates real objects out of incomplete classes (with abstract methods or interfaces). These objects can then be used and passed to the production code, so that when methods are called the Run Time Engine 510 may return fake results to the created methods. These objects are built using the standard Reflection library.
[0035] The Argument Validation 540 is responsible for verifying that the arguments passed are those that were expected. This is done using a hook that actually does the validation. The arguments passed and those expected are sent to a validation method that checks different attributes of the object. The attributes, which may be of virtually unlimited scope, may, for example, indicate that the objects are the same or that the .Equals() method is true. The framework 110 has a predefined group of argument validators, including string comparisons, Group and Set comparisons, and even a validator that verifies that the object is being faked by the framework. The test code 108 can register a customized validator if this is required.
[0036] When Natural Mocks are used, the arguments passed to the recording method are used to validate the arguments, unless explicitly overridden. The framework 110 also allows setting arguments of the mocked methods. This actually changes the values of the arguments before the actual code is called. This is useful for arguments that are passed by reference, so that their values can be changed before they are returned, and for faking [out] arguments.
[0037] The run time engine 510 is called from the code weaved into the production code. The Run Time Engine 510 checks to see if the specific type, instance and method should be faked. If they are, the code may validate the arguments and return the fake return value. The Run Time Engine 510 checks the arguments to see if a conditional expectation should be used. The engine also calls the argument validation, and when the arguments are not valid the engine may throw an exception. There are cases where throwing the exception is not enough and, when configured accordingly, these validation errors may appear at the verifying stage too.
[0038] Performance is an issue for the Run Time Engine 510, as it is run for every method called. One way to solve this is to check first whether the method is faked; this check returns quickly if no mocks have been created or if the type is not mocked. Only after the method is known to be mocked are the arguments passed and validated, since passing the arguments can take time, as they must all be encapsulated within objects. When Natural Mocks are used, the Run Time Engine 510 passes each call to the Natural Mock Recorder.
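The fast-path idea can be sketched as follows: the cheap name check happens before the arguments are ever materialized (the analogue of boxing in .NET). This is an illustrative Python sketch under that assumption; `get_args` stands in for the expensive argument-encapsulation step.

```python
calls_validated = []

def run_time_engine(method_name, mocked_methods, get_args):
    """Fast path: bail out before touching (boxing) the arguments."""
    if method_name not in mocked_methods:   # cheap check, no argument handling
        return None                         # fall through to the original code
    args = get_args()                       # expensive: done only when mocked
    calls_validated.append(args)            # argument validation happens here
    return mocked_methods[method_name]

mocked = {"Logger.Log": "fake"}

# Unmocked call: returns quickly, arguments never materialized.
assert run_time_engine("Other.Method", mocked, lambda: (1, "x")) is None
assert calls_validated == []

# Mocked call: only now are the arguments produced and validated.
assert run_time_engine("Logger.Log", mocked, lambda: (1, "x")) == "fake"
assert calls_validated == [(1, "x")]
```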
[0039] A flow diagram of the Mocked Method Flow described herein is shown in Fig. 8.
[0040] In order for the runtime engine 510 to map the called code to the correct mock expectation the Engine 510 may employ the type, method, instance and type generic and method generic parameters. The last two are for generic specific code only and with them it is possible to map the correct expectations. The engine receives this information from the weaver 104 that analyzed the metadata of the code. When a new instance is created and its constructor is called, the Run Time Engine 510 checks if expectations contain mocks for the new instance. This way the Engine can manage mocking objects that are created after the expectations are set (Future Objects).
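The mapping described in the preceding paragraph amounts to a composite lookup key. The following Python sketch illustrates one plausible shape for such a key (type, method, instance, type generics, method generics); the key layout is an assumption for illustration, not the framework's actual data structure.

```python
expectations = {}

def key(type_name, method, instance=None, type_generics=(), method_generics=()):
    """Composite key used to map a weaved call to its expectation.
    instance=None represents 'all instances' / static methods."""
    return (type_name, method, instance,
            tuple(type_generics), tuple(method_generics))

# Non-generic static call.
expectations[key("DateTime", "get_Now")] = "29-Feb-2004"

# Generic method: the generic parameters disambiguate the expectation.
expectations[key("Stack", "Push", method_generics=("int",))] = "faked int push"

assert expectations[key("DateTime", "get_Now")] == "29-Feb-2004"
# The same method with different generic parameters maps to a different slot.
assert key("Stack", "Push", method_generics=("int",)) != \
       key("Stack", "Push", method_generics=("str",))
```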
[0041] A static constructor is called once for each type. When a static constructor is called, the Run Time Engine 510 remembers that this was mocked. Then when a method of that type is called and the type is not mocked any more, the static constructor may be called. This ensures that mocking the static constructor in one test will not affect another test.
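The static-constructor bookkeeping can be sketched as follows. This Python sketch is illustrative only (Python has no static constructors); the class and method names are invented, and the deferred callable stands in for the real type initializer.

```python
class StaticCtorTracker:
    """Remember mocked static constructors so they can be re-run later."""
    def __init__(self):
        self.mocked_cctors = {}   # type name -> deferred real static constructor

    def on_cctor(self, type_name, real_cctor, is_mocked):
        if is_mocked:
            self.mocked_cctors[type_name] = real_cctor  # suppress, but remember
        else:
            real_cctor()

    def on_method_call(self, type_name, still_mocked):
        # Type no longer mocked: run the deferred real static constructor once,
        # so that mocking it in one test does not affect the next test.
        if not still_mocked and type_name in self.mocked_cctors:
            self.mocked_cctors.pop(type_name)()

ran = []
t = StaticCtorTracker()
t.on_cctor("Logger", lambda: ran.append("Logger.cctor"), is_mocked=True)
assert ran == []                   # real cctor suppressed during the test
t.on_method_call("Logger", still_mocked=False)
assert ran == ["Logger.cctor"]     # re-invoked once the mock is gone
```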
[0042] The verifier is called at the end of the test and throws errors when not all the expected calls are made or when an argument validator fails. The verifier can wait till all expected mocks are completed. This is a feature that helps test multithreaded code, where the tested code runs asynchronously in another thread.
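The "wait until all expected mocks are completed" behavior can be sketched with a countdown. This is an illustrative Python sketch using a thread event; the `Verifier` class and its methods are invented names, not the framework's API.

```python
import threading

class Verifier:
    """Waits until all expected calls complete, then verifies."""
    def __init__(self, expected_calls):
        self.remaining = expected_calls
        self.lock = threading.Lock()
        self.done = threading.Event()
        if expected_calls == 0:
            self.done.set()

    def call_made(self):
        """Invoked by the weaved hook each time an expected call occurs."""
        with self.lock:
            self.remaining -= 1
            if self.remaining == 0:
                self.done.set()

    def verify(self, timeout=5.0):
        """Called at the end of the test; fails if expectations never complete."""
        if not self.done.wait(timeout):
            raise AssertionError("not all expected calls were made")

v = Verifier(expected_calls=2)
workers = [threading.Thread(target=v.call_made) for _ in range(2)]
for w in workers:
    w.start()            # the tested code runs asynchronously in other threads
v.verify()               # blocks until both asynchronous calls complete
for w in workers:
    w.join()
```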
[0043] In certain embodiments of the invention, the framework must run in all .NET versions and uses reflection methods to call the newer version API from the old version. Regarding the production code base 106, nothing has to change. The test code 108 calls the Mock Framework API in order to change the behavior of the production code. The tracer 112 is used to debug and graphically display the methods that are mocked. It is used to analyze the faked and original calls of the production code. Mocking of future objects can be a bit confusing, and the tracer 112 helps track these issues.
[0044] Fig. 9 shows the Mock framework 110 sending messages to the tracer 112 process.
[0045] The configurator 114 is used to configure the behavior of the Framework 110. Using the Configurator 114 it is possible to link a code coverage tool with the mock framework 110. This may be done by changing the registry key of the coverage tool to point to the Profile Linker 401. The Linker 401 then loads both the coverage tool and the mock framework 110.
[0046] Advantages of certain embodiments of the present invention include that it is much easier to verify the code base of an application. There is no need to perform pre-compile steps, or to create specially designed code in order to isolate the code and make it testable. For example, suppose a developer had the following production code: Dogs.GetDog("rusty").Tail.Wag().Speed(5);
[0047] This actually fetches the dog from somewhere in the Internet. Instead of changing the code to be able to insert a fake dog and setting all the expectations on the different methods, using certain embodiments of the invention may enable the code to be isolated by writing:
MockTheFollowing();
Dogs.GetDog("rusty").Tail.Wag().Speed(5);
CheckArguments();
EndMocking();
[0048] In contrast, in the absence of the present invention, the following may have been required:
Write a framework allowing Dogs to fetch from a fake Internet.
Create a fake Internet
Set Dogs to use the fake Internet
Return a fake Dog when "rusty" is called
Return a fake Tail of "rusty"
Make sure that the tail is wagging
Make sure that the wag was set to the correct speed.
The test code 108 would look like this (the production code changes are not shown):
FakeDogInternet fakeInternet = new FakeDogInternet();
Dogs.SetInternet(fakeInternet);
FakeDog fakeDog = new FakeDog();
fakeInternet.ExpectCall("GetDog");
CheckArguments("rusty");
Return(fakeDog);
FakeTail fakeTail = new FakeTail();
fakeDog.ExpectGetProperty("Tail");
Return(fakeTail);
FakeWagger fakeWagger = new FakeWagger();
fakeTail.ExpectCall("Wag").Return(fakeWagger);
fakeWagger.ExpectCall("Speed");
CheckArguments(5);
[0049] The following interfaces would need to be created:
IDogInternet
IDog
ITail
IWagger
[0050] The following implementation would need to be created (this can be done with a dynamic mock framework 110):
FakeDoglnternet
FakeDog
FakeTail
FakeWagger
[0051] The following public method may be in the production code: Dogs.SetInternet().
[0052] An implementation of an embodiment of the invention for .NET code is now described. Provided is the following static method that returns the current time.
Original Code
public static DateTime get_Now()
{
// This is just an example..
return System.DateTicks.ToLocalTime();
}
This is actually compiled to the following ByteCode:
call System::get_DateTicks()
stloc.0
ldloca.s time1
call instance DateTime::ToLocalTime()
ret
[0053] Before the ByteCode is run, the weaver 104 may add code to the ByteCode that mimics the following code, it being emphasized that the weaver 104 adds the code directly to the ByteCode, the original code being unaffected. The equivalent high level language is shown for clarity:
public static DateTime get_Now()
{
// Are we mocked?
if (MockFramework.isMocked("DateTime.get_Now"))
{
// Yes, get the fake return value
object fakeReturn = MockFramework.getReturn("DateTime.get_Now");
// should we Continue with original code?
if (!MockFramework.shouldCallOriginal(fakeReturn))
{
return (DateTime)fakeReturn;
}
}
return System.DateTicks.ToLocalTime();
}
[0054] In practice, the following byte code may be added:
ldstr "DateTime.get_Now"
call MockFramework.isMocked
brfalse.s label1
ldstr "DateTime.get_Now"
call MockFramework.getReturn
dup
brtrue.s 0x07
unbox DateTime
ldind.i1
ret
pop
label1: call System::get_DateTicks()
stloc.0
ldloca.s time1
call instance DateTime::ToLocalTime()
ret
[0055] The stack may be used to keep the fakeReturn object instead of a local variable. This saves the weaver 104 from defining the variable in the metadata. Now that this is in place, it is possible to test that code that counts the number of days in the current month also works for leap years. Following is an example of one test, showing the code to be tested:
// List of days in each month
int[] days_in_month = {31,28,31,30,31,30,31,31,30,31,30,31};
public int CalculateDayInCurrentMonth()
{
DateTime now = DateTime.Now;
int month = now.get_Month();
return days_in_month[month - 1]; // get_Month() is 1-based
}
[0056] Following this, the user wishes to test that the code works for leap years. DateTime.Now is isolated and made to return a fake date, the leap year date. As the system can be isolated, the MockFramework can be instructed to return a fake date:
DateTime leapDate = new DateTime("29-Feb-2004");
// Fake next DateTime.Now, will return 29-Feb-2004
MockFramework.Mock(DateTime.Now).ToReturn(leapDate);
// run the method under test
int actualDays = CalculateDayInCurrentMonth();
// make sure that the correct amount was received
Assert.AreEqual(29, actualDays);
[0057] Verifying Calls: The mechanism can be used to test that a certain call was actually made. In the previous test DateTime.Now might never even be called. As the Mock framework 110 counts the calls made, it can now be verified that the expected calls were actually made.
// fail if we haven't called all expectations
MockFramework.VerifyThatAllExpectedCallsWhereMade();
[0058] Verifying Arguments: Some scenarios require that the arguments that are passed are validated. To support this, the arguments must be sent to the MockFramework for verification. Given Original Code:
public static void Log(int severity,string message) {
Console.WriteLine(severity.ToString()+" "+message);
}
[0059] the Weaved code 107 may be:
public static void Log(int severity,string message)
{
if (MockFramework.isMocked("Logger.Log"))
{
// Yes, get the fake return value and validate the arguments
object fakeReturn = MockFramework.getReturn("Logger.Log",
severity, message);
// should we Continue with original code?
if (!MockFramework.shouldCallOriginal(fakeReturn))
{
return;
}
}
Console.WriteLine(severity.ToString()+" "+message);
}
[0060] This helps to test the code. Now that this is in place it is possible to test that the code logs the correct message. Following is an example of one test:
// Fake next Log
MockFramework.Mock(Logger.Log(1, "An Error message")).CheckArguments();
// run the method under test
RunAMethodThatCallsLog ();
// we will fail if Log is called with other arguments
[0061] Ref and Out Arguments: Some arguments are changed by the method and are passed back to the caller. The following shows how the code is weaved.
Given Original Code:
public static bool OpenFile(string fileName, out File file) {
file = new File (fileName);
return file.Open();
}
the Weaved code 107 may be:
public bool OpenFile(string fileName, out File file)
{
if (MockFramework.isMocked("IO.OpenFile"))
{
// Yes, get the fake return value and validate the arguments
object fakeReturn = MockFramework.getReturn("IO.OpenFile",
fileName, file);
// fake first arg
if (MockFramework.shouldChangeArgument(1))
{
fileName = (string)MockFramework.getArgument(1);
}
// fake 2nd arg
if (MockFramework.shouldChangeArgument(2))
{
file = (File)MockFramework.getArgument(2);
}
// should we Continue with original code?
if (!MockFramework.shouldCallOriginal(fakeReturn))
{
return (bool) fakeReturn;
}
}
file = new File(fileName);
return file.Open();
}
This helps to test the code. It is now possible to isolate the OpenFile. Following is an example of one test:
// Fake next OpenFile and open a test File,
File testFile = new File("testExample");
MockFramework.Mock(IO.OpenFile("realfile", out testFile)).
ToReturn(true).CheckArguments();
// run the method under test
RunAMethodReadsTheFile ();
// we will read the fake file and not the real file, but fail if the real file was not passed
[0062] Modern languages support the notion of Generics. Using Generics allows the same logic to run with different types; a Stack is a classic example. In order to support mocking of generic code, information about the generic parameters must be passed to the Mock framework 110. There may be two kinds of generic parameters: Type Generics, which are types that stay the same for all methods; and Method Generics, which are types that stay the same for one method. These types are passed to the MockFramework.getReturn method.
The Original Code may be:
public static void DoSomething<MethodType>(MethodType action,ClassType message) {
action.Perform(message);
}
The Weaved code 107 may include:
public static void DoSomething<MethodType>(MethodType action,ClassType message)
{
if (MockFramework.isMocked("Namespace.DoSomething"))
{
Type[] typeGenerics = new Type[] { typeof(ClassType) };
Type[] methodGenerics = new Type[] { typeof(MethodType) };
// Yes, get the fake return value and validate the arguments
object fakeReturn = MockFramework.getReturn("Namespace.DoSomething",
typeGenerics, methodGenerics, action, message);
// should we Continue with original code?
if (!MockFramework.shouldCallOriginal(fakeReturn))
{
return;
}
}
action.Perform(message);
}
[0063] Suppose the user has both a class Base with a method Count() and also a class, Derived, that is derived from Base. When calling the Derived.Count() method, the user is actually calling the Base.Count() method. In order to be able to mock Count() only for the derived class, the user needs to know what the class of the method is. This is why the user passes a context with the actual instance to the Mock framework 110. The Weaved code 107 may now look like this:
public int Count()
{
if (MockFramework.isMocked("Base.Count"))
{
// pass this so we can tell if this is being called
// from Base or Derived
object fakeReturn = MockFramework.getReturn("Base.Count",
this);
// should we Continue with original code?
if (!MockFramework.shouldCallOriginal(fakeReturn))
{
return (int)fakeReturn;
}
}
// ... original Count() code ...
}
[0064] According to some embodiments of the present invention, in order to fake PInvoke methods, which are native calls, when a module loads, for each native-call line in the metadata a new method may be added in the metadata that points to the original method (e.g., for CreateProcess, add a new method »CreateProcess). Optionally, "»" may be used in the name, as it is legal for IL but not for higher level languages. The original line may then be changed to point to a newly allocated piece of code that simply calls the new method just defined. In this manner, all calls to the PInvoke method will now be directed to the new method. The new method can now be faked as described herein in relation to normal methods.
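The rename-and-trampoline idea can be sketched as follows. This is an illustrative Python sketch (attribute patching stands in for metadata rewriting, and ">>" stands in for the "»" prefix); the `weave_native` helper and registry keys are invented for the example.

```python
import types

registry = {}

def weave_native(module, name):
    """Keep the original native entry under a new name and install a
    trampoline in its place, so the call becomes hookable."""
    renamed = ">>" + name                   # e.g. CreateProcess -> >>CreateProcess
    registry[renamed] = getattr(module, name)

    def trampoline(*args, **kwargs):
        fake = registry.get("fake:" + name)
        if fake is not None:
            return fake(*args, **kwargs)    # call redirected to the fake
        return registry[renamed](*args, **kwargs)

    setattr(module, name, trampoline)

# A stand-in for a module exposing a native call.
native = types.SimpleNamespace(CreateProcess=lambda cmd: "real:" + cmd)
weave_native(native, "CreateProcess")

assert native.CreateProcess("calc") == "real:calc"   # falls through to original
registry["fake:CreateProcess"] = lambda cmd: "fake:" + cmd
assert native.CreateProcess("calc") == "fake:calc"   # now faked
```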
[0065] According to some embodiments of the present invention, when a static constructor is called, it may be saved. Subsequently, when a clean-up of the system between tests is requested/desired, all the static constructors may be re-invoked. For loaded types that do not have static constructors, all the static fields may be reset. In order to make sure that all types have been identified, all types loaded in the system may be enumerated. According to further embodiments, it is possible to tell if a type has been loaded by:
i. a signal from a profiler,
ii. checking the address of the constructor to see if it points to the JIT or the real method, and/or
iii. using reflection to see if the type/types used by the type are already loaded, for example by using reflection-only loading.
[0066] After an instance of a given method or function is mocked, it may be left in a bad/improper state and, therefore, should not be used in the system after the test. Such methods or functions may be referred to as "stale mocks". To ensure that stale mocks are not re-used by the system, a list of used mock instances may be stored. Whenever an instance is used it may be tested against that list, and the test may fail if the instance is a stale mock, i.e. on the list.
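The stale-mock list can be sketched as a small guard object. This is an illustrative Python sketch; the `StaleMockGuard` class and its methods are invented names for the mechanism described above.

```python
class StaleMockGuard:
    """Tracks instances mocked in finished tests and rejects their re-use."""
    def __init__(self):
        self.stale = set()

    def test_finished(self, mocked_instances):
        # Instances mocked during the test may be in an improper state.
        self.stale.update(id(m) for m in mocked_instances)

    def check(self, instance):
        """Called whenever an instance is used; fails for stale mocks."""
        if id(instance) in self.stale:
            raise RuntimeError("stale mock re-used after its test ended")

class Dog:
    pass

guard = StaleMockGuard()
rusty = Dog()
guard.check(rusty)                 # fine before the test ends
guard.test_finished([rusty])
try:
    guard.check(rusty)             # rusty is now stale
    raised = False
except RuntimeError:
    raised = True
assert raised
```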
[0067] It is appreciated that software components of the present invention including programs and data may, if desired, be implemented in ROM (read only memory) form including CD-ROMs, EPROMs and EEPROMs, or may be stored in any other suitable computer-readable medium such as but not limited to disks of various kinds, cards of various kinds and RAMs. Components described herein as software may, alternatively, be implemented wholly or partly in hardware, if desired, using conventional techniques.
[0068] Features of the present invention which are described in the context of separate embodiments may also be provided in combination in a single embodiment. Conversely, features of the invention which are described for brevity in the context of a single embodiment may be provided separately or in any suitable subcombination.
Appendix A
Hereinafter is presented the Provisional Application # 60/826,759, from which the present Application draws priority. Accordingly, the following explanations should be considered part of the present disclosure.
Method and System for Isolating Software Components
TABLE OF CONTENTS
METHOD AND SYSTEM FOR ISOLATING SOFTWARE COMPONENTS
ABSTRACT
Introduction
It would be easier
SOLUTION
Run Time System
Weaver
Production Code Base
Test Code
Mock Framework
Tracer
Configurator
DETAILED DESCRIPTION
Weaver
Weaving via the MetaData
Profiler Linker
Mock Framework
Production Code Base
Test Code
Tracer
Configurator
THE ADVANTAGES OF THE SOLUTION
PRACTICAL IMPLEMENTATION
Basic Scenario
Verifying Calls
Verifying Arguments
Ref and Out Arguments
Generics
Derived Classes
Abstract
This invention discloses a method that enables isolating software components, without changing the production code.
Introduction
Validating software is a complex problem that grows exponentially as the complexity of the software grows. Even a small mistake in the software can cause a large financial cost.
In order to cut down on these costs, software companies test each software component as they are developed or during interim stages of development.
Testing isolated software components gives better testing results, as the coverage of the tests is much higher and the complexity does not grow exponentially. This is a basic requirement for validating a software component.
In order to isolate the components, there is a need to design the program that utilizes the software components in such a way that the components can be changed. This is part of a pattern called Inversion of Control or Dependency Injection. For example when validating that software behaves correctly on the 29th of February, there is a need to change the computer system's date before running the test. This is not always possible (due to security means) or wanted (it will disturb other applications). The method used today to verify this is by wrapping the system call to get the current date with a new class. This class will have the ability to return a fake date when required. This will allow injecting the fake date into the code being tested for, and enable validating the code under the required conditions. There are many cases where isolating the code base and injecting fake data are required. Here are a few examples:
Fake a behavior that is scarce. (Dates, Out of Memory)
Fake slow running components. (Database, Internet)
Fake components that are difficult to set up (send e-mail, ftp)
Other cases will require a more complex solution. When faking a complete set of APIs (for example, faking sending an e-mail), there is a need to build a framework that enables isolating the complete API set. This means that the code will now have to support creating and calling two different components. One way to do this is to use the Abstract Factory Pattern. Using this pattern, the production code should never create the object (that needs to be faked for tests). Instead of creating the object, the Factory is asked to create the object, and the code calls the methods of the object that the factory created. The factory can then choose what object to create: a real one or a fake one. This requires using an interface that both clients (real and fake) need to implement. It also requires creating a complex mechanism that will allow the factory to choose what object to create and how to do so. This is done mainly through configuration files, although it can be done in code too.
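The Abstract Factory approach described above can be sketched as follows. This Python sketch is illustrative only; the sender classes, the factory, and `notify` are invented names standing in for the real/fake e-mail components discussed in the text.

```python
class RealEmailSender:
    def send(self, to, subject):
        raise RuntimeError("would really send an e-mail")

class FakeEmailSender:
    def __init__(self):
        self.sent = []
    def send(self, to, subject):
        self.sent.append((to, subject))   # record instead of sending

class EmailFactory:
    """Production code asks the factory; it never constructs senders itself."""
    _create = RealEmailSender             # default: the real implementation

    @classmethod
    def create(cls):
        return cls._create()

def notify(address):                      # production code under test
    EmailFactory.create().send(address, "build failed")

# The test wires in the fake via the factory, not via the production code.
fake = FakeEmailSender()
EmailFactory._create = lambda: fake
notify("dev@example.com")
assert fake.sent == [("dev@example.com", "build failed")]
```

This illustrates the cost the patent points out: both clients must share an interface, and a switching mechanism must exist, purely so that the code is testable.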
When testing using fake objects, it is important to validate the arguments passed to the fake object. In this way it is possible to validate that an e-mail that is supposed to be sent has the correct subject and address. The e-mail, of course, is not actually sent. There is no need to validate that component again, as the e-mail tests are done in isolation for the e-mail object.
It is possible to write the fake object and methods by hand or to use a mock framework. A mock framework will dynamically create a fake object that implements the same interface of the real object (the same interface that we created using the Abstract Factory), and has the ability to define the behavior of the object and to validate the arguments passed to the object.
Although these methods work and enable testing the code base, they also require that the code is designed to be testable. This cannot always be done, as sometimes the code calls legacy code, which we don't want to change. Legacy code refers to any code that was not designed to allow insertions of fake objects. It would be too costly to rewrite them, as this will lead to an increase in development time just to make the code testable. The more complex the code the harder it is to maintain. Designing the code to be testable, puts constraints into the design that are not always compatible with the production design. For example, the code may be required to implement hooks that enable changing the actual object to a fake one. This hook can lead to misuse and hard-to-debug code, as it is intended for testing but it is in the production code.
It would be easier
It would be easier to test our code if there were no need to change the design for testability, but just be able to isolate and fake the code required to validate our code. For example it would be easier if the system could be programmed to fake the real e-mail object. There would be no need to create an Abstract Factory or interfaces or hooks if the system could be configured not to make the real calls on the e-mail object, but to fake them.
Solution
In order to solve this problem, the invention adds code that is inserted or weaved into the production code base that is being tested. The added code will enable hooking fake or mock objects into our production code by calling the Mock framework. This framework can decide to return a fake object. The framework will also be able to validate and change the arguments passed into the method.
Fig. 1 shows a Software Isolation System Run Time System
The run time system is the system that actually runs the code and the tests; this could be an operating system, a scripting system or a virtual machine (as in Java or .NET)
Weaver
The weaver is responsible for inserting the added hooking code into the production code base. In each method of the production code the weaver inserts a small piece of code that calls the Mock Framework which then decides whether to call the original code or to fake the call. The inserted code can also modify the arguments passed to the production method if required. This is handy for arguments passed by reference.
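The small piece of inserted hook code can be sketched with a decorator. This Python sketch is illustrative only (a decorator stands in for byte-code weaving); the `weave` helper and the expectation store are invented names.

```python
mock_framework = {"expectations": {}}   # method name -> fake return value

def weave(name):
    """Sketch of the small hook the weaver inserts into every method."""
    def decorate(fn):
        def hooked(*args):
            exp = mock_framework["expectations"].get(name)
            if exp is not None:
                return exp              # the framework decides to fake the call
            return fn(*args)            # otherwise call the original code
        return hooked
    return decorate

@weave("get_now")
def get_now():
    return "real date"

assert get_now() == "real date"                          # not faked yet
mock_framework["expectations"]["get_now"] = "29-Feb-2004"
assert get_now() == "29-Feb-2004"                        # now faked
```

The production function body is untouched; only the entry point is wrapped, which mirrors how the weaver leaves the original code unaffected.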
Production Code Base
The production code base is the code that we want to isolate. There is no need to change the design of this code just to isolate the code.
Test Code
The test code calls the Mock Framework in order to change the behavior of the production code. Here the test can set up what to fake, how to validate the arguments that are passed, what to return instead of the original code and when to fail the test.
Mock Framework
The mock framework is responsible for creating mock objects dynamically and for managing the expectations and behavior of all fake calls.
Tracer
The tracer is used to debug and graphically display the methods that are mocked. It is used to analyze the faked and original calls of the production code.
Configurator
The configurator is used to set the options of the tool.
Detailed Description
Weaver
The weaver is the heart of the solution. There are several ways in which it is possible to insert code into production code. Here are some options.
Change the executable on disk before running the tests.
Use System IO Hooks to change the executable just before reading it from the disk
Use function hooking techniques.
Use RunTime ClassLoader hooks to change the code before it is run.
Use Profiler and Debug API's to change the code before it is loaded.
Each method has its pros and cons. The main decision factors are:
Ease of implementation
Manual vs Automatic choosing on the users side
The decision table for .NET code is shown in the corresponding figure.
The method that was chosen was the Profiler API. In order to solve the issues with the code coverage tool a Profiler Linker was created.
The Weaver registers to the .NET Runtime (CLR) and just before the JIT Compiler is run to create machine code from the Byte code, instructions pertaining to the added hooking code are inserted.
The Weaver analyses the signature of the method in order to understand the parameters passed and the return value. This enables writing code that will call the Mock Framework to check if the method needs to be faked, and to pass the arguments to the Framework for validating. The code also changes the values of the parameters if required. This is useful for parameters that are passed by reference and for swapping the values for the test (e.g. it is possible to change a filename that is passed as a parameter to point to a dummy file required for the test).
The weaver has to change the metadata and add information that points to the correct Mock Framework. This is done by putting the framework in a known directory (GAC) and by parsing the assembly (dll file) to extract relevant information (version and signing signature). Some information is passed from the Mock Framework to the Weaver, this is done using environment variables, although there are other methods available to do this.
Special interest:
The weaver must run well in debug mode too and thus we have to fix the debug to code mapping to ignore the code we added.
Try catch handlers must also be updated to point to the correct positions in the code after the code has been added.
The weaver must take into consideration small and large method headers and event handlers.
Creating new code must take place when the assembly is first loaded
Signed assemblies can only call other signed assemblies, so the Mock Framework is signed
In order to support multiple .NET versions the same Weaver is used and has instructions that enable it to use features of the later version only when that version is available.
The Mock Framework assembly should not be weaved as this will lead to a recursive infinite loop.
The weaver is actually a framework that can be used to insert any new code into a code base.
Fig. 3 shows a Weaver Flow Chart
Weaving via the MetaData
Another method to isolate code and to insert fake objects is by changing the metadata tables. Each call to a method is defined as 'call <entry in method table>'. Each entry in the method table has the name of the method its type (which is actually an <entry in the type table>) and other information. Each entry in the type table has the name of the type and the assembly that it is defined in (which is an <entry in the assembly table>).
By switching these entries, for example the assembly of the <type> and its <name> all calls to a method can be redirected to a mocked object. Although this method requires building the mock object and handling delegating calls back to the original object, it has the advantage of being less intrusive as it does not change the production code only the metadata tables. This is useful in cases where the Run Time System has restrictions on the code being inserted.
Profiler Linker
In order to support profiling and code coverage tools that are required to run together with the tests, a profile linker is required. The profile linker loads one or more profile assemblies (COM objects that are suitable to be a profiler) and then calls each profiler sequentially and weaves code from both the assemblies.
Fig. 4 shows a Profile Linker Diagram
The profiler linker takes care of profilers from different versions and manages to make sure that the profilers work correctly.
Special interest:
In order to have the ability to debug the code, there is a need to map the actual code with the source file. When code is added the map needs to be fixed.
The linker changes the code of both assemblies.
Mock Framework
The mock framework is in charge of managing the expectations. This framework is linked by the test code, and expectations are recorded using the frameworks API. There are a few parts to the mocking framework.
Expectation Manager
Natural Mock Recorder
Dynamic Mock Builder
Argument Validation
Run Time Engine
Verifier
Fig. 5 shows a Mock Framework
Expectation Manager
This module is used to manage the expectations for the fake code. The expectations are kept in the following way; this is not the only way to do this, but it has its advantages.
The Framework holds a map of type expectations that are indexed via the type name.
Each Type Expectation is connected to a list of Instance Expectations indexed by the instance and another reference to an Instance Expectation that represents the expectations for all instances.
Fig. 6 shows an Expectation Manager
All Instance Expectations of the same type reference an Instance Expectation that manages the expectations for all static methods. This is because static methods have no instance.
Each Instance Expectation contains a map of Method Expectations that is indexed via the method name.
Each method has 4 lists.
a default Return Value representing a value to return by default
a queue of return values that should be faked
a queue of conditional values that are used only when the arguments match
a queue of conditional default values that are used only when the arguments match
The Method Expectation first checks for a conditional value, then a default conditional value, then a regular value and finally the default value.
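The resolution order described above can be sketched directly. This Python sketch is illustrative; the class name and field names are invented stand-ins for the four lists kept per Method Expectation.

```python
class MethodExpectation:
    """Resolution order: conditional -> conditional default -> queued -> default."""
    def __init__(self):
        self.conditional = []          # queue of (expected args, value)
        self.conditional_default = {}  # expected args -> value
        self.queued = []               # queue of return values to fake
        self.default = None            # default Return Value

    def resolve(self, args):
        for i, (expected, value) in enumerate(self.conditional):
            if expected == args:       # conditional value is consumed once
                del self.conditional[i]
                return value
        if args in self.conditional_default:
            return self.conditional_default[args]
        if self.queued:
            return self.queued.pop(0)  # queued values are consumed in order
        return self.default

m = MethodExpectation()
m.default = "default"
m.queued.append("next call")
m.conditional.append((("rusty",), "conditional"))

assert m.resolve(("rusty",)) == "conditional"   # conditional wins first
assert m.resolve(("rex",)) == "next call"       # then the queued value
assert m.resolve(("rex",)) == "default"         # finally the default
```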
The Null Return Value and Null Instance Expectation are classes that are part of the Null Object pattern. This leads to faster code while running, as there is no need to check if references to Return Value or Instance Expectation are null.
Expectations of Generic types are managed each in its own Type Expectation class with the generic parameters as a key, although the non generic Type Expectation points to the generic one. Expectations of Generic methods are managed each in its own Method Expectation class with the generic parameters as a key, although the non generic Method Expectation points to the generic one.
There are two ways to set expectations, namely by the use of Reflective Mocks or Natural Mocks, as will now be described.
Reflective Mocks:
Reflective mocks use strings names of the methods that are to be mocked. The Framework analyses the tested assembly and searches for the method and checks that it exists and has the correct return value. The method is then added to the expectations of that method.
The test code can then change the behavior of the code and registers what that method should do and how many times.
We can tell the method to: Return a fake result, throw an exception or call the original code. We can also tell the framework to always fake a method (this is the default return), or to fake the next call or number of calls (managed by the Return Value Stack).
There are also hooks to call user supplied code when the method is called.
As some methods are instance methods, there are ways to tell the Framework what instance to mock. For example, the Framework can be directed to mock all instances, a future instance or to create the mocked instance so that we can pass it to the test code (this is managed by the Type Expectation).
Methods can also have conditional expectations. Conditional expectations will fake calls only if the arguments passed are the same as those expected.
The framework allows expectations to be canceled and changed before the actual code is called.

Natural Mocks
Natural Mocks use the actual calls to the methods that are to be mocked. The Framework will be called by these calls (because all the methods are already weaved), record that the call is expected, and add it to the list of expectations. The framework allows setting the behavior in the same way as Reflective Mocks.
Chained calls are also supported using Natural Mocks. This allows a chain of calls to be mocked in one statement. The Framework will build the return object of one statement in the chain as an input for the next statement in the chain. Of course, the framework has to differentiate between creating Dynamic Mocks for incomplete types and real objects with dummy constructor arguments for complete or static objects.
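For comparison, Python's `unittest.mock.MagicMock` supports the same chained-mock idea out of the box, since every attribute access auto-creates a child mock; the whole chain can be primed in one statement, as with Natural Mocks:

```python
from unittest import mock

# MagicMock auto-creates child mocks for every attribute and call, so the
# entire chain is primed in one statement.
dogs = mock.MagicMock(name="Dogs")
dogs.GetDog.return_value.Tail.Wag.return_value.Speed.return_value = "wagging at 5"

result = dogs.GetDog("rusty").Tail.Wag().Speed(5)

# The recorder also remembers the arguments used in the chain.
dogs.GetDog.assert_called_once_with("rusty")
```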
Using Natural Mocks is easier than using Reflective Mocks, and they are supported by IDE editors that allow code completion and automatic refactoring, but these cannot account for all cases. Refactoring is the process of restructuring code without changing its behavior; there are development tools that help to automate this task.
When a method cannot be called from the code (for example its scope is private), Reflective Mocks must be used. Although Reflective Mocks have the advantage of covering all scopes of the methods, they are more prone to mistakes as the methods are passed as a string.
Fig. 7 shows a Natural Mock Setting Expectations Flow.

Dynamic Mock Builder
The Dynamic Mock Builder is used to create new objects in a dynamic assembly. This creates real objects out of incomplete classes (with abstract methods or interfaces). These objects can then be used and passed to the production code, so that when methods are called the Run Time Engine will return fake results to the created methods.
These objects are built using the standard Reflection library.
Argument Validation
The Argument Validation is responsible for verifying that the arguments passed are what we expected. This is done using a hook that actually does the validation. The Arguments passed and those expected are sent to a validation method that checks different attributes of the object. The attributes, which may be of virtually unlimited scope, may, for example, indicate that the objects are the same or that the .Equals() method is true. The framework has a predefined group of argument validators including string comparisons, Group and Sets comparisons and even verifying that the object is being faked by the framework. The test code can register a customized validator if this is required.
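Argument validators can be sketched in Python as a list of predicates, one per argument (the validator names are illustrative; `is_any` plays the role of a wildcard like `unittest.mock.ANY`, and a custom validator is just another predicate):

```python
def validate_args(validators, args):
    """Run one validator per argument; each returns True/False."""
    return len(validators) == len(args) and all(
        v(a) for v, a in zip(validators, args))

# A small library of predefined validators, plus room for custom ones.
def equals(expected):
    return lambda actual: actual == expected

def is_any():
    return lambda actual: True          # wildcard, like mock.ANY

def in_set(allowed):
    return lambda actual: actual in allowed

ok = validate_args([equals("rusty"), in_set({1, 2, 3})], ("rusty", 2))
bad = validate_args([equals("rusty"), in_set({1, 2, 3})], ("rex", 2))
```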
When Natural Mocks are used, the arguments passed to the recording method are used to validate the arguments, unless explicitly overridden.
The framework also allows setting arguments of the mocked methods. This actually changes the values of the arguments before the actual code is called. This is useful for arguments that are passed by reference, so we can change their values before they are returned and fake [out] arguments.
Run Time Engine
The run time engine is called from the code weaved into the production code. The Run Time engine checks to see if the specific type, instance and method should be faked. If they are, the code will validate the arguments and return the fake return value.
The Run Time Engine checks the arguments to see if a conditional expectation should be used. The engine also calls the argument validation, and when the arguments are not valid the engine will throw an exception. There are cases where throwing the exception is not enough and, when configured accordingly, these validation errors will appear at the verifying stage too.
Performance is an issue for the Run Time Engine, as it runs for EVERY method called. One way to address this is the initial is-faked check, which returns quickly if no mocks have been created or if the type is not mocked. Only after knowing that the method is mocked are the arguments passed and validated, since passing the arguments can take time as they must all be encapsulated within an object.
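The fast-path check can be sketched in Python (names hypothetical): a single boolean guards the common case where nothing is mocked, and the per-method lookup and argument packaging happen only after that guard passes.

```python
class RunTimeEngine:
    """The cheap check comes first: if nothing at all is mocked, every
    weaved call must return as fast as possible."""
    def __init__(self):
        self.any_mocks = False          # flipped when the first mock is set
        self.mocked_methods = {}

    def is_mocked(self, method_key):
        if not self.any_mocks:          # fast path: one boolean test
            return False
        return method_key in self.mocked_methods

    def mock(self, method_key, fake):
        self.any_mocks = True
        self.mocked_methods[method_key] = fake

engine = RunTimeEngine()
before = engine.is_mocked("DateTime.get_Now")   # fast-path miss
engine.mock("DateTime.get_Now", "fake date")
after = engine.is_mocked("DateTime.get_Now")
```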
When Natural Mocks are used the Run Time Engine passes each call to the Natural Mock Recorder.
Fig. 8 shows a Mocked Method Flow.
In order for the runtime engine to map the called code to the correct mock expectation the Engine requires the type, method, instance and type generic and method generic parameters. The last two are for generic specific code only and with them it is possible to map the correct expectations. The engine receives this information from the weaver that analyzed the metadata of the code.
When a new instance is created and its constructor is called, the Run Time Engine checks if expectations contain mocks for the new instance. This way the Engine can manage mocking objects that are created after the expectations are set (Future Objects).
A static constructor is called once for each type. When a static constructor is called, the Run Time Engine remembers that this was mocked. Then when a method of that type is called and the type is not mocked any more, the static constructor will be called. This makes sure that mocking the static constructor in one test won't affect another test.
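The bookkeeping can be sketched in Python (names hypothetical): remember which static constructors ran while mocked, and replay the real one the first time the type is used unmocked, so one test cannot leak state into the next.

```python
class StaticConstructorRegistry:
    """Remember which type initializers ran while mocked, and re-run the
    real one once the type is no longer mocked, so mocking a static
    constructor in one test won't affect another test."""
    def __init__(self):
        self.ran_mocked = set()

    def on_static_ctor(self, type_name, mocked, real_ctor):
        if mocked:
            self.ran_mocked.add(type_name)   # suppress, remember for later
        else:
            real_ctor()

    def on_method_call(self, type_name, mocked, real_ctor):
        if not mocked and type_name in self.ran_mocked:
            real_ctor()                      # replay the real initializer
            self.ran_mocked.discard(type_name)

initialized = []
registry = StaticConstructorRegistry()
# Test 1 mocks the static constructor: the real one is suppressed.
registry.on_static_ctor("Config", mocked=True,
                        real_ctor=lambda: initialized.append("Config"))
# Test 2 uses the type unmocked: the saved constructor is replayed first.
registry.on_method_call("Config", mocked=False,
                        real_ctor=lambda: initialized.append("Config"))
```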
Verifier
The verifier is called at the end of the test and throws errors when not all the expected calls were made or when an argument validator failed. The verifier can wait until all expected mocks are completed. This is a feature that helps test multi-threaded code, where the tested code runs asynchronously in another thread.
Special interest:
The framework must run in all .NET versions, and uses reflection methods to call newer-version APIs from older versions.
Production Code Base
Nothing has to change here.
Test Code
The test code calls the Mock Framework API in order to change the behavior of the production code.
Tracer
The tracer is used to debug and graphically display the methods that are mocked. It is used to analyze the faked and original calls of the production code. Mocking of future objects can be a bit confusing, and the tracer helps track these issues.
Fig. 9 shows the Mock Framework sending messages to the tracer process.

Configurator
The configurator is used to configure the behavior of the Framework. Using the Configurator it is possible to link a code coverage tool with the mock framework. This is done by changing the registry key of the coverage tool to point to the Profile Linker. The Linker then loads both the coverage tool and the mock framework.
The Advantages of the Solution
The advantage of the supplied solution is that it is much easier to verify the code base of an application. There is no need to perform pre-compile steps or to create specially designed code in order to isolate the code and make it testable.
Suppose a developer had the following production code.
Dogs.GetDog("rusty").Tail.Wag().Speed(5);
This actually fetches the dog from somewhere on the internet. Instead of changing the code to be able to insert a fake dog and setting all the expectations on the different methods, using the invention will enable the code to be isolated by writing:
MockTheFollowing();
Dogs.GetDog("rusty").Tail.Wag().Speed(5);
CheckArguments();
EndMocking();
While in the past the following would have been required:
Write a framework allowing Dogs to fetch from a fake internet.
Create a fake internet
Set Dogs to use the fake internet
Return a fake Dog when "rusty" is called
Return a fake Tail of "rusty"
Make sure that the tail is wagged
Make sure that the wag was set to the correct speed.
The test code will look like this (the production code changes are not shown):
FakeDogInternet fakeInternet = new FakeDogInternet();
Dogs.SetInternet(fakeInternet);
FakeDog fakeDog = new FakeDog();
fakeInternet.ExpectCall("GetDog");
CheckArguments("rusty");
Return(fakeDog);
FakeTail fakeTail = new FakeTail();
fakeDog.ExpectGetProperty("Tail");
Return(fakeTail);
FakeWagger fakeWagger = new FakeWagger();
fakeTail.ExpectCall("Wag").Return(fakeWagger);
fakeWagger.ExpectCall("Speed");
CheckArguments(5);
The following interfaces will need to be created:
IDogInternet
IDog
ITail
IWagger
The following implementations must be created (this can be done with a dynamic mock framework):
FakeDogInternet
FakeDog
FakeTail
FakeWagger

The following public method will be in the production code:

Dogs.SetInternet();
Practical Implementation
This section will describe a practical implementation of this invention for .NET code.
Basic Scenario
Suppose we have the following static method that returns the current time.

Original Code
public static DateTime get_Now()
{
    return System.DateTicks.ToLocalTime();
}
This is actually compiled to the following ByteCode:

call System::get_DateTicks()
stloc.0
ldloca.s time1
call instance DateTime::ToLocalTime()
ret
Weaved Code
Before the ByteCode is run, the weaver will add code to the ByteCode that mimics the following code added to the original code, it being emphasized that the weaver adds code directly to the ByteCode, the original code being unaffected. We show the equivalent high-level language for clarity:
public static DateTime get_Now()
{
    if (MockFramework.isMocked("DateTime.get_Now"))
    {
        object fakeReturn = MockFramework.getReturn("DateTime.get_Now");
        if (!MockFramework.shouldCallOriginal(fakeReturn))
        {
            return (DateTime)fakeReturn;
        }
    }
    return System.DateTicks.ToLocalTime();
}
Actually we add the following byte code:

ldstr "DateTime.get_Now"
call MockFramework.isMocked
brfalse.s label1
ldstr "DateTime.get_Now"
call MockFramework.getReturn
dup
brtrue.s 0x07
unbox DateTime
ldind.i8
ret
pop
label1: call System::get_DateTicks()
stloc.0
ldloca.s time1
call instance DateTime::ToLocalTime()
ret
Note: we use the stack to keep the mockReturn object instead of a local variable. This saves us from defining the variable in the metadata.
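In a dynamic language, the effect of the weaved prologue can be sketched with a decorator (a rough analog only; the registry and helper names are hypothetical, and the invention needs no such decorator because the check is injected directly into the ByteCode):

```python
_mock_registry = {}   # method key -> fake return value

def should_call_original(fake):
    # No fake registered for this call: fall through to the real body.
    return fake is None

def weave(key):
    """Decorator mimicking the weaved prologue: check isMocked, fetch the
    fake return value, and only then fall through to the original body."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            if key in _mock_registry:                 # isMocked
                fake_return = _mock_registry[key]     # getReturn
                if not should_call_original(fake_return):
                    return fake_return
            return fn(*args, **kwargs)                # original code
        return wrapper
    return decorator

@weave("DateTime.get_Now")
def get_now():
    return "real time"

unmocked = get_now()
_mock_registry["DateTime.get_Now"] = "29-Feb-2004"
mocked = get_now()
```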
How this helps to test our code
Now that this is in place, we can test that our code that counts the number of days in the current month also works for leap years.
Here is an example of one test.
Here is the code that we want to test:

int[] days_in_month = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};

public int CalculateDayInCurrentMonth()
{
    DateTime now = DateTime.Now;
    int month = now.get_Month();
    return days_in_month[month - 1];
}
Now we want to test that it works for leap years. We need to isolate DateTime.Now and make it return a fake date, the leap-year date. As our system can be isolated, we can tell the MockFramework to return a fake date:

DateTime leapDate = new DateTime("29-Feb-2004");
MockFramework.Mock(DateTime.Now).ToReturn(leapDate);
int actualDays = CalculateDayInCurrentMonth();
Assert.AreEqual(29, actualDays);
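The same kind of test can be sketched in Python with the standard `unittest.mock` library (the `Clock` indirection is an assumption of this sketch; the invention needs no such seam because the check is weaved in). Note that with the fixed 28-day February table, the faked leap date immediately exposes the missing leap-year handling:

```python
import datetime
from unittest import mock

DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

class Clock:
    """Hypothetical seam standing in for DateTime.Now."""
    @staticmethod
    def now():
        return datetime.datetime.now()

def calculate_days_in_current_month():
    return DAYS_IN_MONTH[Clock.now().month - 1]

# Isolate the time source and force the leap date, as in the text.
with mock.patch.object(Clock, "now",
                       return_value=datetime.datetime(2004, 2, 29)):
    days = calculate_days_in_current_month()

# The fixed table answers 28 for February: the isolated test has
# exposed the leap-year bug without waiting for a real 29th of February.
```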
Verifying Calls
The mechanism can be used to test that a certain call was actually made. In the previous test we might never even call DateTime.Now. As the Mock Framework counts the calls made, we can now verify that the expected calls were actually made.

MockFramework.VerifyThatAllExpectedCallsWhereMade();
Verifying Arguments
Some scenarios require that we validate the arguments that are passed. To support this we send the arguments to the MockFramework for verifying.
Original Code
public static void Log(int severity, string message)
{
    Console.WriteLine(severity.ToString() + " " + message);
}
Weaved Code
public static void Log(int severity, string message)
{
    if (MockFramework.isMocked("Loader.Log"))
    {
        // Yes, get the fake return value and validate the arguments
        object fakeReturn = MockFramework.getReturn("Loader.Log",
            severity, message);
        // should we continue with original code?
        if (!MockFramework.shouldCallOriginal(fakeReturn))
        {
            return;
        }
    }
    Console.WriteLine(severity.ToString() + " " + message);
}

How this helps to test our code
Now that this is in place, we can test that our code logs the correct message. Here is an example of one test.

MockFramework.Mock(Loader.Log(1, "An Error message"))
    .CheckArguments();
RunAMethodThatCallsLog();
Ref and Out Arguments
Some arguments are changed by the method and are passed back to the caller. Here is how our code is weaved
Original Code
public static bool OpenFile(string fileName, out File file)
{
    file = new File(fileName);
    return file.Open();
}
Weaved Code
public static bool OpenFile(string fileName, out File file)
{
    if (MockFramework.isMocked("IO.OpenFile"))
    {
        object fakeReturn = MockFramework.getReturn("IO.OpenFile",
            fileName, file);
        if (MockFramework.shouldChangeArgument(1))
        {
            fileName = (string)MockFramework.getArgument(1);
        }
        if (MockFramework.shouldChangeArgument(2))
        {
            file = (File)MockFramework.getArgument(2);
        }
        if (!MockFramework.shouldCallOriginal(fakeReturn))
        {
            return (bool)fakeReturn;
        }
    }
    file = new File(fileName);
    return file.Open();
}
How this helps to test our code
We can now isolate OpenFile.
Here is an example of one test.

File testFile = new File("testExample");
MockFramework.Mock(IO.OpenFile("realfile", out testFile))
    .ToReturn(true).CheckArguments();
RunAMethodReadsTheFile();
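Python has no `out` parameters, so a rough analog returns the value instead; the sketch below (names illustrative) shows the same test shape: the fake supplies both the success flag and the "out" file object, and the arguments are verified.

```python
from unittest import mock

def open_file(file_name):
    # Python's idiom for C#'s 'out' parameter: return the value instead.
    handle = {"name": file_name, "open": True}   # stand-in for a File object
    return True, handle

def read_first_line(opener, file_name):
    ok, handle = opener(file_name)
    if not ok:
        return None
    return f"contents of {handle['name']}"

# Real path: the production opener supplies the handle.
real_line = read_first_line(open_file, "realfile")

# Isolated path: the fake supplies both the bool result and the
# 'out' file object, as in the weaved OpenFile example above.
fake_handle = {"name": "testExample", "open": True}
fake_open = mock.Mock(return_value=(True, fake_handle))

line = read_first_line(fake_open, "realfile")
fake_open.assert_called_once_with("realfile")   # argument verification
```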
Generics
Modern languages support the notion of Generics. Using Generics allows the same logic to run with different types; a Stack is a classic example. In order to support mocking only certain instantiations of generic code, information about the generic parameters must be passed to the Mock Framework.
There are two kinds of generic parameters:
1. Type Generic - these are types that stay the same for all methods.
2. Method Generics - these are types that stay the same for one method.
These types are passed to the MockFramework.getReturn method.
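Keying expectations by generic parameters can be sketched in Python (names hypothetical), with one entry per instantiation, so that for example Stack&lt;int&gt; can be mocked while Stack&lt;string&gt; stays real:

```python
class TypeExpectation:
    """Expectations for a generic type, keyed by its type parameters; the
    non-generic entry points at the per-instantiation ones."""
    def __init__(self):
        self.by_type_args = {}      # e.g. (int,) -> fake behavior

    def set_fake(self, type_args, fake):
        self.by_type_args[tuple(type_args)] = fake

    def get_fake(self, type_args):
        return self.by_type_args.get(tuple(type_args))

stack_expectation = TypeExpectation()
stack_expectation.set_fake([int], "fake int stack")

int_fake = stack_expectation.get_fake([int])   # Stack<int> is mocked
str_fake = stack_expectation.get_fake([str])   # Stack<string> is not
```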
Original Code
public static void DoSomething<MethodType>(MethodType action, ClassType message)
{
    action.Perform(message);
}
Weaved Code
public static void DoSomething<MethodType>(MethodType action, ClassType message)
{
    if (MockFramework.isMocked("Namespace.DoSomething"))
    {
        Type[] typeGenerics = new Type[] { typeof(ClassType) };
        Type[] methodGenerics = new Type[] { typeof(MethodType) };
        object fakeReturn = MockFramework.getReturn("Namespace.DoSomething",
            typeGenerics, methodGenerics, action, message);
        if (!MockFramework.shouldCallOriginal(fakeReturn))
            return;
    }
    action.Perform(message);
}
Derived Classes
Suppose we have a class Base with method Count() and a class Derived that is derived from Base. When calling the Derived.Count() method, we are actually calling the Base.Count method. In order to be able to mock Count() only for the derived class, we need to know the class of the instance on which the method is called. This is why we pass a context with the actual instance to the Mock Framework.
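Passing the instance as context can be sketched in Python (names hypothetical): the engine keys fakes by the instance's actual class, so only calls made through Derived are faked while Base keeps its real behavior.

```python
class Engine:
    def __init__(self):
        self.fakes = {}      # (actual class, method name) -> fake result

    def mock_for(self, cls, method, fake):
        self.fakes[(cls, method)] = fake

    def get_return(self, method, instance):
        # The context (the actual instance) tells the engine whether the
        # call comes from the derived class or the base class.
        return self.fakes.get((type(instance), method))

class Base:
    def count(self, engine):
        fake = engine.get_return("count", self)   # pass 'self' as context
        if fake is not None:
            return fake
        return 0             # original behavior

class Derived(Base):
    pass

engine = Engine()
engine.mock_for(Derived, "count", 99)      # mock count() only for Derived

derived_result = Derived().count(engine)   # faked
base_result = Base().count(engine)         # real
```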
Our code will now look like this
Weaved Code
public int Count()
{
    if (MockFramework.isMocked("Base.Count"))
    {
        // pass this so we can tell which class is being called
        object fakeReturn = MockFramework.getReturn("Base.Count", this);
        // should we continue with original code?
        if (!MockFramework.shouldCallOriginal(fakeReturn))
        {
            return (int)fakeReturn;
        }
    }
    // ... original body of Count() ...
}

Claims

1. A system for providing testing for software applications, said system comprising: a tangible medium containing processor executable software testing code adapted to cause one or more processors to:
at least partially isolate from within a given software application, during runtime, a software component which performs a given function by modifying data within metadata tables associated with the component of the given software application, such that a call to the component is redirected to alternate testing code of the testing application adapted to override behavior of the component; and
test, by use of the processors running the testing code, the given software application, by imposing a fake behavior on the software component, wherein imposing a fake behavior includes removing or replacing an expected behavior of the coupled software component, during runtime.
2. The system according to claim 1, wherein said testing code is adapted to cause the one or more processors to test the given software application by imposing a fake behavior on the software component, without modifying any code of the given software application, by use of access provided by the metadata modification.
3. The system according to claim 1, wherein the given software application does not include interfaces for injection of alternate functions.
4. The system according to claim 1, wherein said testing code further comprises a linker adapted to: (1) load two or more profilers during execution of said testing application, (2) call each of the profilers sequentially, and (3) weave code from each of the profilers.
5. The system according to claim 1, wherein said testing code further comprises a test execution manager adapted to, during software application test execution, intercept object creation instructions from one or more software components being tested and to return to the one or more software components being tested a mocked instance of the object created.
6. The system according to claim 1, wherein said testing code is further adapted to cause the one or more processors to track objects returned from mocked calls and to mock all methods of these objects, recursively, so as to facilitate a chained mock response.
7. The system according to claim 1, wherein overriding behavior of a software component includes generating a mocked response selected from the group consisting of: (1) returning fake data in response to the call; (2) faking a failure of the software component; and (3) providing the software component a fake argument.
8. The system according to claim 1, wherein said testing code is further adapted to cause the one or more processors to track instances of the coupled software component and set different expectations for different instances of the coupled software component.
9. The system according to claim 1, wherein said testing code is further adapted to cause the one or more processors to identify base methods and factor an originating type of a given based method when imposing a fake behavior on the given based method.
10. The system according to claim 4, wherein said testing code is further adapted to cause the one or more processors to detour a createprocess application programming interface in order to automatically set correct environment variables for said linker to properly link the two or more profilers.
11. The system according to claim 1, wherein said testing code is further adapted to cause the one or more processors to add a new method in the metadata tables which points to an original method and then change.
12. The system according to claim 1, wherein said testing code is further adapted to cause the one or more processors to save static constructors as they are called and reinvoke the saved static constructors to clean up between tests.
13. The system according to claim 12, wherein said testing code is further adapted to cause the one or more processors to reset static fields for loaded types during the clean up.
14. The system according to claim 1, wherein said testing code is further adapted to cause the one or more processors to maintain a list of mocked instances, thereby providing for identification of stale mocks.
15. A system for providing profiling, said system comprising:
a tangible medium containing processor executable code adapted to cause one or more processors to:
i. identify when code associated with a given software application is to be modified by two or more applications;
ii. load two or more profile assemblies;
iii. call each of the loaded two or more profile assemblies sequentially;
iv. weave code into the code associated with the given software application from each of the two or more profile assemblies.
16. The system according to claim 15, wherein said processor executable code is further adapted to cause the one or more processors to detour a createprocess application programming interface in order to automatically set correct environment variables for said system to properly link two or more profilers.
17. The system according to claim 15, wherein the given software application does not include interfaces for injection of alternate functions.
18. A system for providing profiling, said system comprising:
a tangible medium containing processor executable code adapted to cause one or more processors to:
i. identify when metadata associated with a given software application is to be modified by two or more applications;
ii. load two or more profile assemblies;
iii. call each of the loaded two or more profile assemblies sequentially;
iv. weave code into the metadata associated with the given software application from each of the two or more profile assemblies.
19. The system according to claim 18, wherein said processor executable code is further adapted to cause the one or more processors to detour a createprocess application programming interface in order to automatically set correct environment variables for said system to properly link two or more profilers.
20. The system according to claim 18, wherein the given software application does not include interfaces for injection of alternate functions.
PCT/IB2017/050323 2016-01-25 2017-01-22 Methods and systems for isolating software components WO2017130087A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/005,145 US10078574B2 (en) 2006-09-25 2016-01-25 Methods and systems for isolating software components
US15/005,145 2016-01-25

Publications (1)

Publication Number Publication Date
WO2017130087A1 true WO2017130087A1 (en) 2017-08-03

Family

ID=59397524

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2017/050323 WO2017130087A1 (en) 2016-01-25 2017-01-22 Methods and systems for isolating software components

Country Status (1)

Country Link
WO (1) WO2017130087A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107807841A (en) * 2017-10-18 2018-03-16 中国平安人寿保险股份有限公司 Server analogy method, device, equipment and readable storage medium storing program for executing
CN107807841B (en) * 2017-10-18 2020-10-09 中国平安人寿保险股份有限公司 Server simulation method, device, equipment and readable storage medium
CN109359036A (en) * 2018-09-25 2019-02-19 腾讯科技(深圳)有限公司 Test method, device, computer readable storage medium and computer equipment
CN109857656A (en) * 2019-01-18 2019-06-07 深圳壹账通智能科技有限公司 Adaptation method, device, computer equipment and storage medium based on test

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6484276B1 (en) * 1999-10-25 2002-11-19 Lucent Technologies Inc. Method and apparatus for providing extensible object-oriented fault injection
US20040220947A1 (en) * 2003-05-02 2004-11-04 International Business Machines Corporation Method and apparatus for real-time intelligent workload reporting in a heterogeneous environment
US20060265693A1 (en) * 2005-05-20 2006-11-23 Microsoft Corporation Methods and apparatus for software profiling
US20070033443A1 (en) * 2005-08-04 2007-02-08 Microsoft Corporation Unit test generalization
US8370941B1 (en) * 2008-05-06 2013-02-05 Mcafee, Inc. Rootkit scanning system, method, and computer program product


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FREEMAN ET AL.: "Mock Roles, Not Objects", COMPANION TO THE 19TH ANNUAL ACM SIGPLAN CONFERENCE ON OBJECT-ORIENTED PROGRAMMING SYSTEMS, LANGUAGES, AND APPLICATIONS, OOPSLA, 2004, pages 1 - 11, XP055123060, Retrieved from the Internet <URL:http://www.jmock.org/articles.html> *
HOSSAIN, MOCKING CONSTRUCTORS WITH JUSTMOCK, 13 December 2012 (2012-12-13), pages 1 - 11, Retrieved from the Internet <URL:http://www.telerik.com/blogs/mocking-constructors-with-justmock> *


Similar Documents

Publication Publication Date Title
US10078574B2 (en) Methods and systems for isolating software components
US9251041B2 (en) Method and system for isolating software components
US8954929B2 (en) Automatically redirecting method calls for unit testing
US9471282B2 (en) System and method for using annotations to automatically generate a framework for a custom javaserver faces (JSF) component
US7320123B2 (en) Method and system for detecting deprecated elements during runtime
US8001530B2 (en) Method and framework for object code testing
US8171460B2 (en) System and method for user interface automation
US8607208B1 (en) System and methods for object code hot updates
US10331425B2 (en) Automated source code adaption to inject features between platform versions
CN103970659B (en) Android application software automation testing method based on pile pitching technology
US9965257B2 (en) Automatic configuration of project system from project capabilities
US20100325607A1 (en) Generating Code Meeting Approved Patterns
US20170235670A1 (en) Method for tracking high-level source attribution of generated assembly language code
WO2017130087A1 (en) Methods and systems for isolating software components
Smith et al. Value-dependent information-flow security on weak memory models
Abercrombie et al. jContractor: Bytecode instrumentation techniques for implementing design by contract in Java
Lopes et al. Unit testing aspectual behavior
Bocic et al. Symbolic model extraction for web application verification
WO2013062956A1 (en) Automatically testing a program executable on a graphics card
Cazzola et al. Dodging unsafe update points in java dynamic software updating systems
US20050034120A1 (en) Systems and methods for cooperatively building public file packages
Tan Security analyser tool for finding vulnerabilities in Java programs
Császár et al. Building fast and reliable reverse engineering tools with Frida and Rust
Vincenzi et al. JaBUTi–Java Bytecode Understanding and Testing
Marick Generic Coverage Tool (GCT) User’s Guide

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17743802

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17743802

Country of ref document: EP

Kind code of ref document: A1