Now that more Siebel projects around the world are embarking on upgrading their Siebel applications to Siebel Open UI, it's about time we advance this concept into something more tangible that clients can benefit from.
Picture this... as part of the Open UI upgrade, you are required to navigate to 1000+ views in Siebel and test for WCAG defects in all 3 major browsers. That isn't so exciting.
Testing for Open UI defects requires a high level of thoroughness to ensure that your entire application is compliant, and remains compliant in the future. The move to Open UI will expose poor configuration practices that might have been passable in HI, but will break in Open UI. Open UI will also introduce defects as a side effect of the upgrade.
The simple approach to this problem is to brute-force it: assign a team of developers to navigate to each view and analyse it for technical defects. But this isn't viable as a long-term strategy; a smarter approach is to use web automation.
Web automation brings to mind images of robots that are used to scrape web sites, harvest email addresses and index content, but Siebel developers have a more important itch to scratch: Testing the Siebel UI.
The obvious use case is to run continuous integration testing, to ensure that new builds are thoroughly tested overnight. A more advanced crawler can be built to perform functional testing of the application's main areas, but the scope of this article is to show how you can build your own Open UI crawler that can be used to programmatically navigate to each view, and optionally validate your application.
There's no shortage of tools on the market that can perform this sort of work, but if we narrow our criteria to open source web automation solutions, Selenium WebDriver makes a pretty good choice, as it's also set to become a W3C recommendation. Selenium works with all the major browsers, is compatible with your favourite programming language, and is also free.
This article will provide you with a sample application that implements Open UI automation with Ruby, but you can extract the lessons learnt from this article, and implement them in your language of choice.
I've chosen Ruby as my language because it has solid support for web automation. Ruby has gems that allow the developer to easily parse the DOM, make selections using CSS or XPath syntax, deal with dialog boxes, and take screenshots of problem views. Combining this with Watir, a Selenium wrapper for Ruby, allows the developer to build the automation quickly in a lightweight language.
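To give a flavour of that DOM parsing outside the browser, here is a minimal sketch using REXML from Ruby's standard library to pick out view links with an XPath selection. The sample markup is a simplified, hypothetical fragment of what the Site Map renders; the real Open UI markup is richer, but the selection technique is the same.

```ruby
require 'rexml/document'

# Hypothetical, simplified fragment of the Site Map markup
xml = '<div><span class="viewName"><a onclick="x">Account List View</a></span></div>'

doc = REXML::Document.new(xml)

# Select every link inside a span of class "viewName" and collect its text
names = REXML::XPath.match(doc, "//span[@class='viewName']/a").map(&:text)
puts names.inspect  # => ["Account List View"]
```

In the crawler itself, Watir performs the equivalent selection live against the browser's DOM, so no separate parsing step is needed; this sketch is only to show how approachable the selection syntax is.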
The solution for your project may be different. If you require enterprise support or headless servers, or if you simply prefer to stick to your language of choice because of your available skill set, then the right tool for your circumstances will differ. The most important ingredient here is Selenium, or in this case Watir, which is a Ruby flavour of Selenium.
This is the browser automation API that allows you to control your browser programmatically.
" Selenium automates browsers. That's it! What you do with that power is entirely up to you. Primarily, it is for automating web applications for testing purposes, but is certainly not limited to just that. Boring web-based administration tasks can (and should!) also be automated as well."
Drivers are available for Chrome, Firefox, IE and the other major browsers, as well as remote control drivers for driving browsers on other machines.
The crawler can be run against a thin client, or against a local thick client running Open UI.
Knowing Ruby isn't a prerequisite to understanding this concept. Ruby is a pretty readable language, and I've added a good dose of comments to explain what the code is doing, so those from non-Ruby backgrounds can follow.
#import required libraries
require 'rubygems'
require 'watir-webdriver'
require 'cgi'

#Parametrise the URL that will be used to navigate to the Open UI application.
#This should really be configured for a server URL for real testing,
#but I'm using a local URL for this example
sURL = "http://localhost/start.swe?"
sUser = nil
sPass = nil

#Open a new Chrome browser session
browser = Watir::Browser.new :chrome

#navigate to the URL defined above
browser.goto sURL

#maximise the window so we can see the entire view
browser.window.maximize

#This block simulates the user login process for a thin client connection,
#else it is bypassed for the thick client
if sUser != nil && sPass != nil
  #fill in the user name field
  username = browser.text_field(:name, "SWEUserName")
  username.set sUser if username != nil

  #fill in the password field
  password = browser.text_field(:name, "SWEPassword")
  password.set sPass if password != nil
  sleep 1

  #click the login button
  browser.link(:text => "Login").when_present.click
end
sleep 5
Congratulations! At this point, we have created a simple crawler that has logged into an Open UI application.
Next, we want to tell it how to navigate the application: go to the Site Map, read all the views, and parse the JS links embedded in the view elements. This is done pretty easily in Ruby.
#use a CSS selector to locate the Site Map icon, and click it
browser.element(:css => "li[name=SiteMap] > img").when_present.click

#use a CSS selector to read all the view links in the Site Map into a collection.
#These links also contain an onclick attribute that lets us emulate a view navigation
links = browser.elements(:css => "span[class='viewName'] > a")
Finally, we loop through every link, execute the JS code to perform the navigation, and optionally perform any automation.
links.each do |link|
  sleep 5
  #Go to the view by executing the onclick JS embedded in the link
  browser.execute_script(link.attribute_value("onclick"))
end
That's as simple as it needs to be.
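If you want the crawler to report which view it is visiting, you can also pull the parameters out of the onclick string with the standard CGI library. The onclick value below is a hypothetical example; the exact format varies by Siebel version, but it typically embeds an SWE command string of query-string parameters.

```ruby
require 'cgi'

# Hypothetical onclick value; real Siebel Open UI onclick strings vary by version
onclick = "javascript:SWESubmitForm(document.SWEForm1,'s_1','','SWECmd=GotoView&SWEView=Account+List+View')"

# Extract the SWE command string and decode its parameters
query  = onclick[/SWECmd=[^']*/]
params = CGI.parse(query)

puts params["SWEView"].first  # => "Account List View"
```

Logging the decoded SWEView value against each navigation makes it much easier to match a flagged defect back to the view it occurred in.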
To fire it up with a dedicated client, you'll first need to open an existing session to start the local web server. When the above code is run, it will launch a new browser connecting to that existing session.
For thin client connections, it is not necessary to pre-launch the session; the above code will instantiate a new browser session and log in normally.
A crawler like the program above can navigate the application, verify each view, and flag views that have issues.
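As a taste of what "verify each view" might look like, here is a deliberately minimal, WCAG-flavoured check: flag image tags that lack an alt attribute. In the real crawler you would run a check like this against the HTML of each view (for example, the string returned by Watir's browser.html); the helper name and the regex-based approach are my own simplification, not a complete accessibility audit.

```ruby
# Minimal WCAG-style check: flag <img> tags without an alt attribute.
# A real crawler would run this against the HTML of each view it visits.
def missing_alt_images(html)
  html.scan(/<img\b[^>]*>/i).reject { |tag| tag =~ /\balt\s*=/i }
end

html = '<img src="logo.png" alt="Logo"><img src="icon.png">'
puts missing_alt_images(html).inspect  # => ["<img src=\"icon.png\">"]
```

A production validator would use a proper HTML parser and a fuller WCAG ruleset, but even a handful of crude checks like this one will surface real defects across 1000+ views.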
Building a simple crawler is easy, and is something you could hand to an energetic graduate to complete in a day. However, if you require a crawler that can be a workhorse for your continuous integration strategy, then it needs to be more robust and more scalable than the simple example above. If your needs warrant a more specialised crawler that can check for and enforce WCAG compliance in your application, then you'll ideally need a plugin system that allows you to easily add new validators. If you want to go a little further and build in functional testing capabilities, then you could build an API bridge to facilitate a DSL for writing more readable test cases.
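The plugin system mentioned above can start out very small. The sketch below is one possible shape, with names of my own invention: a registry where each validator is registered under a name and receives the view's HTML, returning the issues it finds. New checks then become one registration call each.

```ruby
# Sketch of a validator plugin registry: each validator is registered by
# name and receives the view's HTML, returning an array of issues found.
class ValidatorRegistry
  def initialize
    @validators = {}
  end

  def register(name, &block)
    @validators[name] = block
  end

  # Run every registered validator against one view's HTML,
  # returning a hash of validator name => issues
  def run_all(html)
    @validators.map { |name, v| [name, v.call(html)] }.to_h
  end
end

registry = ValidatorRegistry.new

# Example plugin: flag <img> tags that have no alt attribute
registry.register(:img_alt) do |html|
  html.scan(/<img\b[^>]*>/i).reject { |t| t =~ /\balt\s*=/i }
end

results = registry.run_all('<img src="a.png">')
puts results[:img_alt].length  # => 1
```

Hooked into the navigation loop, the crawler would call run_all on each view's HTML and record any non-empty results against the view name, giving you a per-view defect report out of a nightly run.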
Open UI + Selenium opens up these exciting opportunities. For my client, having a nightly build process with continuous integration and WCAG reporting ensures a strictly standards-compliant UI and a more stable application for every deploy.
This article provides the necessary ingredients, and a simple recipe for other Siebel customers to follow the same path.
Selenium WebDriver
More on Open UI
Serious Open UI developers should be on the watch for the upcoming Open UI book from some very distinguished authors:
Siebel Open UI Developer's Handbook