Automate iPadOS Split View Multitasking With Appium

      iPad Pros run a slightly different version of iOS called iPadOS, and this version of iOS comes with several really useful features. One of the favorites is the ability to run two apps side-by-side. Apple calls this Split View multitasking, and getting it going involves a fair bit of gestural control from the user. Here's how a user would turn on Split View:

  1. Open the first app they want to work with
  2. Show the multitasking dock (using a slow short swipe up from the bottom edge)
  3. Touch, hold, and drag the icon of the second app they want to work with to the right edge of the screen

      From this point on, the two apps will remain conjoined in Split View until the user drags the app separator all the way to the edge of the screen, turning Split View off. Of course, both apps must be designed to support Split View for this to work.

      Let’s now discuss how we can walk through this same series of steps with Appium to get ourselves into Split View mode, and further be able to automate whichever app of the two we desire. Unfortunately, there’s no single command to make this happen, and we have to use a lot of tricky techniques to mirror the appropriate user behavior. Basically, we need to worry about these things:

  1. Ensuring both apps have been opened recently enough to show up in the dock
  2. Executing the correct gestures to show the dock and drag the app icon to trigger Split View
  3. Telling Appium which of the apps in the Split View we want to work with at any given moment

Let's walk through how to achieve each of these steps.

Ensuring applications are in the dock

      For our strategy to work, we need the icon of the app we want to open in Split View to be in the dock. The best way to make this happen is to ensure that it has been launched recently; in fact, most recently apart from the currently-running app. Let's take a look at the setup for an example where we'll load up both Reminders and Photos in Split View. In our case, we'll want Reminders on the left and Photos on the right. Because we're going to open up Photos on the right, we'll actually launch it first in our test, so that we can close it down, open up Reminders, and then open up Photos as the second app.

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("platformName", "iOS");
capabilities.setCapability("platformVersion", "13.3");
capabilities.setCapability("deviceName", "iPad Pro (12.9-inch) (3rd generation)");
capabilities.setCapability("app", PHOTOS);
capabilities.setCapability("simulatorTracePointer", true);
driver = new IOSDriver(new URL("http://localhost:4723/wd/hub"), capabilities);
wait = new WebDriverWait(driver, 10);
size = driver.manage().window().getSize();

      In this setUp method, we also construct a WebDriverWait, and store the screen dimensions on a member field, because we’ll end up using them frequently. When we begin our test, the Photos app will be open. What we want to do next is actually terminate Photos, and launch Reminders. At this point, we’ve launched both the apps we want to work with, so they are both the most recently-launched apps, and will both show up in the recent apps section of the dock. Then, we go back to the Home Screen, so that the dock is visible:

// terminate Photos and launch Reminders to make sure they're both the most recently launched apps
driver.executeScript("mobile: terminateApp", ImmutableMap.of("bundleId", PHOTOS));
driver.executeScript("mobile: launchApp", ImmutableMap.of("bundleId", REMINDERS));

// go to the home screen so we have access to the dock icons
ImmutableMap pressHome = ImmutableMap.of("name", "home");
driver.executeScript("mobile: pressButton", pressHome);

      In the next step of this flow, we figure out where the Photos icon is, and save that information for later. Then we re-launch Reminders, so that it is active and ready to share the screen with Photos.

// save the location of the icons in the dock so we know where they are
// when we need to drag them later, but no longer have access to them as
// elements
Rectangle photosIconRect = getDockIconRect("Photos");

// relaunch Reminders
driver.executeScript("mobile: launchApp", ImmutableMap.of("bundleId", REMINDERS));

      There is an interesting helper method here: getDockIconRect takes an app name and returns the position of its dock icon on the screen:

protected Rectangle getDockIconRect(String appName) {
    By iconLocator = By.xpath("//*[@name='Multitasking Dock']//*[@name='" + appName + "']");
    WebElement icon = wait.until(
        ExpectedConditions.presenceOfElementLocated(iconLocator));
    return icon.getRect();
}

      Here we use an XPath query to ensure that the element we retrieve is actually the dock icon and not the home screen icon. Then we return the screen rectangle representing that element, so that we can use it later.

Showing the dock and entering into Split View

      At this point we are ready to call a special helper method designed to slowly drag the dock up in preparation for running the Split View gesture:

// pull the dock up so we can see the recent icons, and give it time to settle
showDock();
Thread.sleep(1000);

protected void showDock() {
    swipe(0.5, 1.0, 0.5, 0.92, Duration.ofMillis(1000));
}

      The showDock method performs a slow swipe from the bottom middle of the screen, up just far enough to show the dock. Now that the dock is visible, we can actually enter Split View. To do that, we make use of a special iOS-specific command, mobile: dragFromToForDuration, which lets us perform a touch-and-hold on the location of the Photos dock icon, then drag it to the right side of the screen. We wrap this up in a helper method called dragElement. Below is the implementation:

// now we can drag the photos app icon over to the right edge to enter split view, also give it a bit of time to settle
dragElement(photosIconRect, 1.0, 0.5, Duration.ofMillis(1500));
Thread.sleep(1000);

protected void dragElement(Rectangle elRect, double endXPct, double endYPct, Duration duration) {
    Point start = new Point(elRect.x + elRect.width / 2, elRect.y + elRect.height / 2);
    Point end = new Point((int)(size.width * endXPct), (int)(size.height * endYPct));
    driver.executeScript("mobile: dragFromToForDuration", ImmutableMap.of(
        "fromX", start.x, "fromY", start.y,
        "toX", end.x, "toY", end.y,
        "duration", duration.toMillis() / 1000.0));
}

      Essentially, we take the rect of a dock icon, pass in the ending x and y coordinate percentages, and the duration of the “hold” portion of the gesture. The dragElement helper converts these to the appropriate coordinates, and calls the mobile: method.
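To make that conversion concrete, here is a standalone sketch of the same math. The screen size and icon rect below are assumed, illustrative values (not taken from a real device); the calculation mirrors what dragElement does with the real Rectangle and screen dimensions:

```java
// Illustration of the coordinate math inside dragElement(): the gesture
// starts at the center of the element's rect and ends at a screen-percentage
// position. The screen size and icon rect used here are assumed values.
public class DragMathDemo {
    // Compute "startX,startY -> endX,endY" the same way dragElement does.
    static String dragCoords(int screenW, int screenH,
                             int rectX, int rectY, int rectW, int rectH,
                             double endXPct, double endYPct) {
        int startX = rectX + rectW / 2;        // center of the icon rect
        int startY = rectY + rectH / 2;
        int endX = (int) (screenW * endXPct);  // percentage of full screen width
        int endY = (int) (screenH * endYPct);  // percentage of full screen height
        return startX + "," + startY + " -> " + endX + "," + endY;
    }

    public static void main(String[] args) {
        // assumed: 1024x1366-point screen, 80x80 icon at (100, 1300),
        // dragged to the right edge at the vertical middle
        System.out.println(dragCoords(1024, 1366, 100, 1300, 80, 80, 1.0, 0.5));
        // prints 140,1340 -> 1024,683
    }
}
```

The resulting pixel values are what get handed to mobile: dragFromToForDuration as fromX/fromY and toX/toY.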

Working with simultaneously open applications

      At this stage in our flow, we've got both apps open in Split View! But if we take a look at the page source, we'll find that we only see the elements for one of the apps. In fact, we can only work with one app's elements at a time. We can, however, tell Appium which app we want to work with, by updating the defaultActiveApplication setting to the bundle ID of whichever app we want to automate:

driver.setSetting("defaultActiveApplication", PHOTOS);
wait.until(ExpectedConditions.presenceOfElementLocated(MobileBy.AccessibilityId("All Photos")));
driver.setSetting("defaultActiveApplication", REMINDERS);
wait.until(ExpectedConditions.presenceOfElementLocated(MobileBy.AccessibilityId("New Reminder")));

      In the code above, you can see how we call driver.setSetting, with the appropriate setting name and bundle ID. After doing this for a given app, we can find elements within that app, and of course we can switch to any other app if we want as well.

      So that's how we can enter Split View and automate each application on the screen. Try using these techniques in your own iPadOS automation.

Reference: Appium Pro

make it perfect!

Execute Your Arbitrary ADB Commands with Appium

      If you’re not a big Android person, you might not know about ADB, the “Android Debug Bridge”. ADB is a powerful tool provided as part of the Android SDK by Google, that allows running all sorts of interesting commands on a connected emulator or device. One of these commands is adb shell, which gives you shell access to the device filesystem (including root access on emulators or rooted devices). adb shell is the perfect tool for solving many problems.

      Appium did not historically allow running arbitrary ADB commands. This is because Appium was designed to run in a remote environment, possibly sharing an OS with other services or Appium servers, and with potentially many connected Android devices. It would be a huge security hole to give any Appium client the full power of ADB in this context. Recently, the Appium team decided to unlock this functionality behind a special server flag, so that someone running an Appium server could intentionally open up this security hole. This is achieved using the --relaxed-security flag. So you can now start up Appium like this to run arbitrary ADB commands:

appium --relaxed-security

      With Appium running in this mode, you have access to a new "mobile:" command called "mobile: shell". The Appium "mobile:" commands are special commands that can be accessed using executeScript (at least until client libraries provide a nicer interface for taking advantage of them). Here's how a call to "mobile: shell" looks in Java:

driver.executeScript("mobile: shell", arg);

arg needs to be a JSONifiable object with two keys:

  • command: a String, the command to be run under adb shell.
  • args: an array of Strings, the arguments passed to the shell command.

      For example, let’s say we want to clear out the pictures on the SD card, and that on our device, these are located at /mnt/sdcard/Pictures. If we were running ADB on our own without Appium, we’d accomplish our goal by running:

adb shell rm -rf /mnt/sdcard/Pictures/*.*

      To translate this to Appium’s “mobile: shell” command, we simply strip off adb shell from the beginning, and we are left with rm -rf /mnt/sdcard/Pictures/*.*

      The first word here is the “command”, and the rest constitute the “args”. So we can construct our object as follows:

List<String> removePicsArgs = Arrays.asList("-rf", "/mnt/sdcard/Pictures/*.*");
Map<String, Object> removePicsCmd = ImmutableMap.of("command", "rm", "args", removePicsArgs);
driver.executeScript("mobile: shell", removePicsCmd);

      We can also retrieve the result of the ADB call, for example if we wish to verify that the directory is now indeed empty:

List<String> lsArgs = Arrays.asList("/mnt/sdcard/Pictures");
Map<String, Object> lsCmd = ImmutableMap.of("command", "ls", "args", lsArgs);
String lsOutput = (String) driver.executeScript("mobile: shell", lsCmd);
Assert.assertEquals("", lsOutput);

      The output of our command is returned to us as a String which we can do whatever we want with, including making assertions on it.

      In this article, I hope you got a taste of the power of ADB through a few simple file system commands. You can do many more useful things with ADB than deleting files, so go out there and have fun with it in your automation flow.

Reference: Appium Pro


Android 11 Highlights

       We know that Android 11 was released on September 8th, 2020. Android 11 is optimized for how you use your phone, giving you powerful device controls and easier ways to manage conversations, privacy settings, and much more. Let's look at the Android 11 highlights:

  • Manage your conversations: Get all your messages in one place. See, respond to, and control your conversations across multiple messaging apps, all in the same spot. Then select the people you always chat with; these priority conversations show up on your lock screen, so you never miss anything important. With Android 11, you can pin conversations so they always appear on top of other apps and screens. Bubbles keep the conversation going while you stay focused on whatever else you're doing. Nearby Share helps you quickly and securely send files, videos, map locations, and more to devices nearby. It works with Android devices, Chromebooks, or devices running the Chrome browser.
  • Capture and share content: Screen recording lets you capture what’s happening on your phone. And it’s built right into Android 11, so you don’t need an extra app. Record with sound from your mic, your device or both. Select text from your apps. Grab images too. On Pixel devices, you can easily copy, save and share info between many apps. Like your browser, your delivery app or from the news.
  • Helpful tools that predict what you want: Smart Reply suggests responses in conversations. App suggestions make it easy to get to the apps you need most. Smart folders provide smarter ways to organize your apps.
  • Control your phone with your voice: With Android 11, Voice Access is faster and easier to use. Intuitive labels on apps help you control and navigate your phone, all by speaking out loud. Even use Voice Access offline, for more support whenever you need it.
  • Accessibility: Lookout now has two new modes, Scan Document and Food Label, which help people with low vision or blindness get things done faster and more easily. Opening Lookout also turns on your flashlight, helping users read in low light. And Lookout is now available on all 2GB+ devices running Android 6.0 or later.
  • 5G detection API and Ethernet tethering: With new APIs, apps know if you’re on a 5G connection. So you get better performance. Share a tethered internet connection with a USB ethernet dongle.
  • Digital Well-being: Bedtime Mode quiets your phone when it’s time to go to sleep. Schedule it to run automatically or while your phone charges as you rest. Your screen switches to grayscale and your notifications go silent with Do Not Disturb. The new bedtime feature in Clock helps you set a healthy sleep schedule. Track screen time at night and fall asleep to calming sounds. Then wake up to your favorite song. Or use the Sunrise Alarm that slowly brightens your screen to start the day.
  • Enterprise: Get full privacy from IT on your work profile on company-owned devices. Plus new asset management features for IT to ensure security without visibility into personal usage. Connect work and personal apps to get a combined view of your information in places like your calendar or your reminders. Easily disconnect from work. With Android 11, you can now set a schedule to automatically turn your work profile on and off. Use the work tab in more places to share and take actions across work and personal profiles. See work tabs when sharing, opening apps and in settings. Get a new notification if your IT admin has turned on location services on your managed device.
  • Device Controls: Control your connected devices from one place. Set the temperature to chill, then dim your lights, all from a single spot on your phone. Just long press the power button to see and manage your connected devices, making life at home that much easier.
  • Media Controls: Switch from your headphones to your speaker without missing a beat. Tap to hear your tunes or watch video on your TV. With Android 11, you can quickly change the device that your media plays on.
  • Connect Android to your car. Skip the cable: Hit the road without plugging in. Android Auto now works wirelessly with devices running Android 11—so you can bring the best of your phone on every drive.
  • Privacy and Security: You control what apps can access. Take charge of your data with Android. You choose whether to give apps you download permission to access sensitive data. Or not. So you stay better protected. Give one-time permissions to apps that need your mic, camera or location. The next time the app needs access, it must ask for permission again. If you haven’t used an app in a while, you may not want it to keep accessing your data. So Android will reset permissions for your unused apps. You can always turn permissions back on. With Android 11, you get even more security and privacy fixes sent to your phone from Google Play. The same way all your other apps update. So you get peace of mind. And your device stays armed with the most recent defense.

Thanks for spending your time here to read this article and know about the new features of Android 11.

Reference: android.com


Explore and Learn iOS 14 Features

 

       We know that iOS 14.0 beta 5 was released on August 18th, 2020. The following are the expected new features of iOS 14. Try to learn more about these features and the areas we need to think about from a developer or tester perspective. Let's discuss the new iOS 14 features below:

  • A new home screen with the App Library: Apple is finally changing the iOS home screen! With iOS 14, you’ll be able to actually remove apps from your home screens, and even eliminate entire screens. It is a great way to clean up your iPhone’s home screen without losing access to all your stuff, and it is the most significant change to the iPhone’s home screen in years.
  • New Widgets on the Today view and home screen: With iOS 14, Apple will completely overhaul the widgets experience. The new widgets can have more information and a bunch of new sizes, but most importantly, they can be dragged right off the Today view and onto your home screen. A single “Smart Stack” widget lets you swipe through your commonly used widgets, and can even be set to automatically show you the widget you’re most likely to need throughout the day.
  • A whole new Siri interface: Siri’s full-screen takeover will finally become a thing of the past. When you trigger Siri in iOS 14, it will simply show the Siri “blob” at the bottom of your display, and a lot of the results will show as a rich notification at the top of your screen.
  • Picture-in-picture: Once only available on iPad, picture-in-picture mode is finally coming to iPhone with iOS 14. When watching a video or talking on a FaceTime call, you can swipe back to the home screen and the video will continue to play in a little box, allowing you to keep using your iPhone for other things.
  • App Clips lets you use mini-apps on the spot: Apple is introducing a whole new class of application called App Clips. These are little micro parts-of-apps that allow you to use specific Apps without having to download, install, and sign in to a big app to do one simple thing. A developer creates an App Clip when they make their app, making sure the experience is under 10MB in size so it downloads and opens quickly. Developers are encouraged to use Sign In with Apple and Apple Pay so you don’t need to log in or create accounts. App Clips will show in the App Library, and show an app icon surrounded by a dotted line. You can re-access the App Clip this way, or easily download the full app.
  • Major Messages improvements: Messages is arguably the most important mobile app in Apple’s arsenal. Apple is adding some big features to Messages across iOS, iPadOS, and macOS. You can now pin up to nine conversations, keeping them at the top of your Messages stack. That is a relief to anyone who has a lot of different conversations going, or just gets a lot of two-factor authentication codes over SMS.
  • Memoji updates: There are seven new hairstyles, 16 new pieces of headwear, three new memoji stickers, face coverings and an expanded range of ages. Memoji have been refined with new facial and muscle structure to make them more expressive, too.
  • Maps improvements: The Maps app will help you find places to visit in major cities with a new Guides feature. Apple is working with major third-party travel companies to provide guides to landmarks, sightseeing, restaurants, hotels, shopping, and other activities.
  • Camera improvements: The Quick Take video mode enjoyed by the iPhone 11 (press and hold the shutter button in Photo mode to take a video) will come to iPhone XR and iPhone XS. And all iPhones will get the ability to change video resolution and frame rate in the Camera app, rather than digging into the Settings app. In addition to the single-shot exposure lock, there is now an exposure compensation slider that lasts for an entire session whether taking photos or videos.
  • Default email and browser apps: With iOS 14, you’ll be able to designate third-party apps to be your default email or browser. You can currently run other browsers and email apps on iPhone, but when you open a link or email address, it will still open Safari or Mail. With iOS 14, when you click on a link or email address, it will open the app of your choosing instead.
  • FaceTime improvements: Do you remember how iOS 13 had a neat feature where it would tweak your eyes to make it look like you were looking at the camera, instead of down at your display? That “eye contact” feature never ended up shipping, but it’s back in iOS 14.
  • Keyboard tweaks: The dictation feature is now better and works entirely on your device, but that is not the big-ticket item. The big-ticket item is that the emoji picker now has a search bar, just like it does on the Mac.
  • Privacy enhancements: You can give apps your approximate location instead of your exact location (perfect for a weather or sports app), for example. When an app asks to access your photos, you can select specific photos to give it access to, instead of your whole library. While an app is accessing your camera, a little green dot will show in the status bar. There’s an amber dot for when your microphone is accessed. Because some apps ask for permission to access your camera or microphone for legit reasons, but then watch or listen to you when you’re not expecting it.
  • The Translate app: Apple’s got a new first-party app called Translate, and it’s basically the Apple version of the popular Google Translate app. Just pick two languages, hit the microphone button, and the app will listen to your voice and provide text and voice translations. You can even download many languages to your device and it’ll work entirely offline.
  • Apple Arcade: Apple is updating the Arcade tab in iOS 14 to show you the games your Game Center friends are playing, quickly access games you played recently (even if it was on a different device), and make it easier to find and sort through all the Arcade games.
  • ARKit 4: Apple keeps expanding its augmented reality tools for developers, even if it isn’t making a lot of impressive features for users yet. ARKit 4 tools let developers place Location Anchors so an AR object can occupy a specific place in the real world. Like a virtual sculpture in a public square. There is a new Depth API that lets developers build 3D mesh environments and per-pixel depth information on the latest iPad Pro with its LiDAR scanner. Presumably these features will be important on future iPhone Pro models, too.
  • Safari improvements: Safari is faster than ever in iOS 14, and more secure, too. You can access privacy reports for websites, and Apple will monitor saved passwords to see if any have been involved in recent data breaches.
  • CarPlay: Apple’s car interface has a host of small but welcome improvements. There are new categories of apps allowed, including EV charging, parking, and food ordering apps.
  • Car Keys: Apple is one of the first to bring a standardized version of digital car keys to your iPhone.

Thanks for spending your time here to read this article and know about the new features of iOS 14.

Reference: MACWORLD


 

Play with Jenkins Pipeline

Jenkins pipeline

      We know that CI/CD is one of the best practices for DevOps teams to implement. It is also an agile methodology best practice, as it enables software development teams to focus on meeting business requirements, code quality, and security because deployment steps are automated.

    Continuous Integration is a coding philosophy and set of practices that drive development teams to implement small changes and check in code to version control repositories frequently. Because most modern applications require developing code in different platforms and tools, the team needs a mechanism to integrate and validate its changes. The technical goal of CI is to establish a consistent and automated way to build, package, and test applications.

     Continuous Delivery picks up where continuous integration ends. Continuous Delivery automates the delivery of applications to selected infrastructure environments. Most teams work with multiple environments other than the production, such as development and testing environments, and Continuous Delivery ensures there is an automated way to push code changes to them.

       CI/CD tools help store the environment-specific parameters that must be packaged with each delivery. CI/CD automation then performs any necessary service calls to web servers, databases, and other services that may need to be restarted, or follows other procedures when applications are deployed. In this article I would like to share more about building a CI/CD pipeline using the tool Jenkins.

      Jenkins is an open source Continuous Integration/Continuous Delivery tool. It is used to integrate and automate your product development and testing processes. The purpose of the tool is to build and test projects continuously. With the ascent of agile, this helps developers integrate changes into the project as quickly as possible and obtain fresh builds ready for testing. Jenkins is very flexible, provides plugin support for most integrations, and is very easy to set up and configure. One of the important parts of Jenkins is the Build Pipeline. It gives an overview of the various jobs running on builds after commits are made by developers, and shows which tasks Jenkins is currently executing.

      As the name Pipeline suggests, it connects one section to another to achieve a goal. In a Jenkins Build Pipeline, the build can be segmented into sections such as the compile, code review, unit test, packaging, and deployment phases. These phases can be executed either in series or in parallel; if one phase succeeds, the pipeline automatically moves on to the next. Here, we will see how the Jenkins Build Pipeline works in the following scenarios:

  • Jenkins Build Pipeline within Development Project.
  • Jenkins Build Pipeline between Development Project and Test Automation Project.

Jenkins Build Pipeline within Development Project

     In this case, I have a development project (created in Maven) with its code committed into a version control system (Git or Bitbucket). After committing the latest code, the next steps are integrating the repository into Jenkins, defining tasks in Jenkins, and creating a pipeline for the different phases: compiling the code, code review, unit testing, and packaging. These phases will execute one after another in Jenkins. Below are the detailed steps:

Step 1: Start the Jenkins server and, once the server is ready, open the Jenkins Dashboard (make sure that all the suggested plugins are installed).

Step 2: Create a Freestyle Project "1_compile" to compile the development project.

  • Map the development project repository in the Source Code Management section.
  • In the Build section, select the Invoke top-level Maven targets option and enter compile as the Goal. Click Apply and Save.

Step 3: Create a Freestyle Project "2_code review" for code review.

  • Map the same development project repository in the Source Code Management section.
  • In the Build Triggers section, select the Build after other projects are built option, enter the project name 1_compile in the Projects to watch field, and select the Trigger only if build is stable option. Project 2_code review will trigger after successful execution of 1_compile.
  • In the Build section, select the Invoke top-level Maven targets option and enter -P metrics pmd:pmd as the Goal. Here I used the PMD plugin for static code analysis (it is deprecated; you can use the Warnings Next Generation plugin instead). Click Apply and Save.

Step 4: Create a Freestyle Project "3_unit test" to unit test the development project.

  • Map the same development project repository in the Source Code Management section.
  • In the Build Triggers section, select the Build after other projects are built option, enter the project name 2_code review in the Projects to watch field, and select the Trigger only if build is stable option. Project 3_unit test will trigger after successful execution of 2_code review.
  • In the Build section, select the Invoke top-level Maven targets option and enter test as the Goal. Click Apply and Save.

Step 5: Create a Freestyle Project "4_packaging" to generate the build.

  • Map the same development project repository in the Source Code Management section.
  • In the Build Triggers section, select the Build after other projects are built option, enter the project name 3_unit test in the Projects to watch field, and select the Trigger only if build is stable option. Project 4_packaging will trigger after successful execution of 3_unit test.
  • In the Build section, select the Invoke top-level Maven targets option and enter package as the Goal. Click Apply and Save.

[Figure: jenkins_1]

Step 6: Now all the jobs are ready to be mapped into the pipeline.

  • Referring to the figure above, click the plus (+) icon. Enter a View name and select the Build Pipeline View option, then click OK. If you do not see the Build Pipeline View option, you need to install the Build Pipeline plugin in Jenkins.
  • Click Apply and OK. You will get the dashboard view below, ready to run the pipeline before execution:

[Figure: jenkins_2]

  • Once the execution completes, you will get the view below:

[Figure: jenkins_3]

Step 7: Now your pipeline is ready to trigger the various phases of development and deployment, including compile, code review, unit test, and packaging.
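As an aside, the same chain of freestyle jobs can also be expressed as a single pipeline job. The following declarative Jenkinsfile is only a sketch (it assumes the Pipeline plugin is installed and Maven is available on the agent as mvn); the stage names mirror the jobs above:

```groovy
pipeline {
    agent any
    stages {
        stage('1_compile')     { steps { sh 'mvn compile' } }            // compile the code
        stage('2_code review') { steps { sh 'mvn -P metrics pmd:pmd' } } // static analysis
        stage('3_unit test')   { steps { sh 'mvn test' } }               // unit tests
        stage('4_packaging')   { steps { sh 'mvn package' } }            // build the artifact
    }
}
```

With this approach, a failing stage stops the pipeline automatically, just as the "Trigger only if build is stable" option does for chained freestyle jobs.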

Jenkins Build Pipeline between Development Project and Test Automation Project

     In this case, I have a development project (created in Maven) and a test automation project (also created in Maven). For both projects, the latest code is committed into its respective repository in the version control system (Git or Bitbucket). The next steps are integrating the repositories into Jenkins, defining tasks in Jenkins, and creating the pipeline in Jenkins. Below are the detailed steps showing how the test automation scripts trigger once the build is generated from the development project using the pipeline:

Step 1: Start the Jenkins server and, once the server is ready, open the Jenkins Dashboard (make sure that all the suggested plugins are installed).

Step 2: Create a Freestyle Project "1_dev_project" to generate a build for the subsequent test automation script execution.

  • Map the development project repository in the Source Code Management section.
  • In the Build section, select the Invoke top-level Maven targets option and enter package as the Goal. Click Apply and Save.

Step 3: Create a Freestyle Project "2_test_project" to execute the test automation scripts against the new build generated by 1_dev_project.

  • Map the test automation project repository in the Source Code Management section.
  • In the Build Triggers section, select the Build after other projects are built option, enter the project name 1_dev_project in the Projects to watch field, and select the Trigger only if build is stable option. Project 2_test_project will trigger after successful execution of 1_dev_project.
  • In the Build section, select the Invoke top-level Maven targets option and enter clean test as the Goal. Click Apply and Save.

[Figure: jenkins_4]

Step 4: Now all the jobs are ready to be mapped into the pipeline.

  • Referring to the figure above, click the plus (+) icon to the right of All. Enter a View name and select the Build Pipeline View option, then click OK. If you do not see the Build Pipeline View option, you need to install the Build Pipeline plugin in Jenkins.
  • Click Apply and OK. You will get the dashboard view below, ready to run the pipeline before execution:

[Figure: jenkins_5]

  • Once the execution completes, you will get the view below. My automation test project failed because an expected element was not found in the application.

[Figure: jenkins_6]

Step 5: Now your pipeline is ready to trigger the test automation scripts once the build has been successfully generated from the development project.
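This build-then-test chain, too, could live in one declarative Jenkinsfile instead of two freestyle jobs. A sketch only; the pom.xml paths below are hypothetical and assume both projects are checked out side by side on the agent:

```groovy
pipeline {
    agent any
    stages {
        // package the development project first
        stage('1_dev_project')  { steps { sh 'mvn -f dev-project/pom.xml package' } }
        // then run the automation suite against the fresh build
        stage('2_test_project') { steps { sh 'mvn -f test-project/pom.xml clean test' } }
    }
}
```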

      I hope you enjoyed exploring Jenkins and the Jenkins Build Pipeline, both within a development project and between a development project and a test automation project. Try implementing a Jenkins pipeline on your end and enjoy your DevOps strategy.
