Accessibility Testing Approaches – Part II

      In the previous article, we discussed the basics of Accessibility Testing, its importance, the main focus areas, and how to perform Accessibility Testing using the Axe browser extension and the Axe command line. We also introduced the Axe core library as a way to perform accessibility testing and identify violations. In this article, we will look more closely at the Axe core library and its implementation.

Axe Core Library

      This is one of the more advanced approaches to performing accessibility testing and identifying violations. With the Axe core library, we can automate accessibility testing as part of your functional regression automation. In this approach we use the selenium dependency from the com.deque.html.axe-core group and the analyze method of the AxeBuilder class. Once you create an Accessibility Testing suite, you can configure it in CI/CD pipelines for continuous testing.


      Following are the prerequisites to start Accessibility Testing:

  • Install Java JDK 8 or above.
  • Install an IDE such as Eclipse or IntelliJ IDEA.
  • Add the following Maven dependencies:
    • testng
    • selenium-java
    • selenium-server
    • selenium from the group com.deque.html.axe-core
    • jackson-databind
    • jackson-dataformat-csv
    • poi-ooxml-schemas
    • poi-ooxml
    • poi
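For reference, here is how the axe-core Selenium dependency above might be declared in the project's pom.xml (the version shown is illustrative; check Maven Central for the latest release):

```xml
<!-- Deque axe-core Selenium integration; version is illustrative -->
<dependency>
    <groupId>com.deque.html.axe-core</groupId>
    <artifactId>selenium</artifactId>
    <version>4.2.2</version>
</dependency>
```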
Step-by-step approach:

Step 1: Create a Maven project and add the above dependencies to the pom.xml of the project.

Step 2: Create a Java class AccessibilityTestHelper to hold the logic that tracks violations. Implement the following methods:

  • trackViolations – a public method that you can call from your test classes to track violations. It uses the analyze method of the AxeBuilder class to identify violations, then creates an excel file (if it does not already exist) with the help of the createExcelFile method. Once the file is created, it writes the violation details (violation id, description, impact, help, help URL, and WCAG tags) into the file using the writeToExcel method. It also tracks the violations in JSON and text files. The implementation is below:
/**
 * Method to track the violations using AxeBuilder support
 *
 * @author sanojs
 * @param driver
 * @param pageName
 */
public void trackViolations(WebDriver driver, String pageName) {
	Results violationResults;
	try {
		violationResults = new AxeBuilder().analyze(driver);
		if (!new File(System.getProperty("user.dir") + "\\Results").exists()) {
			(new File(System.getProperty("user.dir") + "\\Results")).mkdir();
		}
		int j = 2;
		String filePath = System.getProperty("user.dir") + "\\Results\\AccessibilityTestReport.xlsx";
		createExcelFile(filePath);
		for (int i = 0; i < violationResults.getViolations().size(); i++) {
			writeToExcel(filePath, pageName, 1, j, violationResults.getViolations().get(i).getId());
			writeToExcel(filePath, pageName, 2, j, violationResults.getViolations().get(i).getDescription());
			writeToExcel(filePath, pageName, 3, j, violationResults.getViolations().get(i).getImpact());
			writeToExcel(filePath, pageName, 4, j, violationResults.getViolations().get(i).getHelp());
			writeToExcel(filePath, pageName, 5, j, violationResults.getViolations().get(i).getHelpUrl());
			writeToExcel(filePath, pageName, 6, j, violationResults.getViolations().get(i).getTags().toString());
			j++;
		}
		// also track the violations in JSON and text files, suffixed with the current timestamp
		AxeReporter.writeResultsToJsonFile(
				System.getProperty("user.dir") + "\\Results\\" + pageName + "_" + getCurrentDateAndTime(),
				violationResults);
		AxeReporter.writeResultsToTextFile(
				System.getProperty("user.dir") + "\\Results\\" + pageName + "_" + getCurrentDateAndTime(),
				violationResults.getViolations().toString());
	} catch (Exception e) {
		e.printStackTrace();
	}
}
  • createExcelFile – a private method that creates the excel file if it does not already exist. The file is created at the project level. The implementation is below:
/**
 * Method to create a new excel file
 *
 * @author sanojs
 * @param filePath
 * @return the new workbook, or null if the file already exists
 */
private XSSFWorkbook createExcelFile(String filePath) {
    XSSFWorkbook workbook = null;
    try {
        File file = new File(filePath);
        if (!file.exists()) {
            workbook = new XSSFWorkbook();
            // a brand-new workbook has no sheets, so create the first one explicitly
            XSSFSheet sheet = workbook.createSheet("Summary");
            XSSFCellStyle style = workbook.createCellStyle();
            XSSFFont font = workbook.createFont();
            font.setFontHeightInPoints((short) 15);
            style.setFont(font);
            Row header = sheet.createRow(0);
            Cell headerCell = header.createCell(0);
            headerCell.setCellValue("Accessibility Testing Report");
            headerCell.setCellStyle(style);
            Row row = sheet.createRow(1);
            row.createCell(0).setCellValue("Please go through following tabs to know the violations");
            FileOutputStream fos = new FileOutputStream(file);
            workbook.write(fos);
            fos.close();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return workbook;
}
  • writeToExcel – a private method that writes the violations into the created excel file. The implementation is below:
/**
 * Method to write the violations in an excel report
 *
 * @author sanojs
 * @param filePath
 * @param sheetName
 * @param columnIndex
 * @param rowNum
 * @param data
 * @return true if the value was written successfully
 */
private boolean writeToExcel(String filePath, String sheetName, int columnIndex, int rowNum, String data) {
    InputStream in = null;
    XSSFWorkbook wb = null;
    FileOutputStream fileOutStream = null;
    try {
        if (filePath == null || filePath.trim().equals(""))
            throw new Exception("File path should not be empty");
        if (!filePath.endsWith(".xlsx"))
            throw new Exception("Only .xlsx files are supported");
        if (rowNum <= 0 || columnIndex < 1)
            throw new Exception("Invalid row or column index");

        in = new FileInputStream(filePath);
        wb = new XSSFWorkbook(in);

        // reuse the sheet for this page if it already exists, otherwise create it with the header row
        XSSFSheet sheet = wb.getSheet(sheetName);
        if (sheet == null) {
            sheet = wb.createSheet(sheetName);
            Row header = sheet.createRow(0);
            header.createCell(0).setCellValue("Violation ID");
            header.createCell(1).setCellValue("Violation Description");
            header.createCell(2).setCellValue("Violation Impact");
            header.createCell(3).setCellValue("Violation Help");
            header.createCell(4).setCellValue("Violation Help URL");
            header.createCell(5).setCellValue("Violation Issue Tags");
        }

        Row row = sheet.getRow(rowNum - 1);
        if (row == null)
            row = sheet.createRow(rowNum - 1);
        Cell cell = row.getCell(columnIndex - 1);
        if (cell == null)
            cell = row.createCell(columnIndex - 1);
        cell.setCellValue(data);
        sheet.autoSizeColumn(columnIndex - 1);

        in.close();
        fileOutStream = new FileOutputStream(filePath);
        wb.write(fileOutStream);
    } catch (Exception e) {
        e.printStackTrace();
        return false;
    } finally {
        try {
            if (fileOutStream != null)
                fileOutStream.close();
            if (wb != null)
                wb.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    return true;
}
  • getCurrentDateAndTime – a private method that generates the current timestamp, used as a suffix for the JSON and text file names. The implementation is below:
/**
 * Method to get the current date and time
 *
 * @author sanojs
 * @return the current timestamp in yyyy-MM-dd_HH-mm-ss format
 */
private String getCurrentDateAndTime() {
    DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
    DateFormat timeFormat = new SimpleDateFormat("HH-mm-ss");
    Date date = new Date();
    String currdate = dateFormat.format(date);
    String currtime = timeFormat.format(date);
    return currdate + "_" + currtime;
}
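As a side note, the same timestamp shape can be produced with the thread-safe java.time API. This small sketch (the class and method names are my own) is equivalent to the helper above:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampDemo {

    // same yyyy-MM-dd_HH-mm-ss shape as getCurrentDateAndTime, built on java.time
    static String timestamp(LocalDateTime moment) {
        return moment.format(DateTimeFormatter.ofPattern("yyyy-MM-dd_HH-mm-ss"));
    }

    public static void main(String[] args) {
        // a fixed moment so the output is deterministic
        System.out.println(timestamp(LocalDateTime.of(2021, 3, 14, 9, 26, 53)));
        // prints 2021-03-14_09-26-53
    }
}
```

Unlike SimpleDateFormat, DateTimeFormatter is immutable, so it is safe to share between parallel test threads.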

Step 3: Create a TestNG class SampleAccessibilityTest with logic to start the driver session, and create a test case, say accessibilityTesting. In the test case we just need to load the web page on which to perform the Accessibility Testing and then call the trackViolations method. The implementation is below:

@BeforeClass
public void setUp() {
    try {
        ChromeDriverService chromeservices = new ChromeDriverService.Builder()
                .usingDriverExecutable(new File("path to your driver executable")).usingAnyFreePort().build();
        driver = new ChromeDriver(chromeservices);
        // load the web page that you need to test for accessibility
        driver.get("https://www.example.com");
    } catch (WebDriverException e) {
        e.printStackTrace();
    }
}

@Test
public void accessibilityTesting() {
    try {
        new AccessibilityTestHelper().trackViolations(driver, "Home Page");
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Step 4: Execute the above test case and see the reports in excel, JSON and text formats. These reports are generated in the Results folder of the project. Based on the trackViolations call above, a sheet named Home Page will be created in the excel report to track all the violations. If you call trackViolations for different pages of your web application, new sheets will be created to track the violations of each page. Below is the project structure:

      Accessibility Testing is an important type of testing that adds value to your business and helps deliver user-friendly applications. Axe Core is a very powerful framework that can help a team build web products that are inclusive. In this article we discussed different ways to test accessibility, including the automation part. I hope everyone enjoyed the concepts and implementations of Accessibility Testing. Please try to utilize this opportunity in your testing world.

make it perfect!

Accessibility Testing Approaches – Part I

      We know that Accessibility Testing is the practice of making web and mobile apps usable by as many people as possible. It makes apps accessible to those with disabilities, such as vision impairment, hearing disabilities, and other physical or cognitive conditions. We have to perform accessibility testing to meet the needs of all users. It's also the law: between the Web Content Accessibility Guidelines (WCAG), Section 508, and the Americans with Disabilities Act (ADA), you have plenty of regulations to meet. We need to ensure that applications work with screen readers, speech recognition software, screen magnification, and more. In this article, we will discuss what needs to be tested as part of accessibility, the importance of accessibility testing, and how we can achieve accessibility testing for web applications using Axe.

Importance of Accessibility Testing

      In Accessibility Testing, we not only check usability, but also how the application would be used by people with visual, auditory, motor, cognitive and speech disabilities. Accessibility Testing is important for businesses because it makes their critical web and mobile applications easily accessible, including to people with disabilities. As published by the Web Accessibility Initiative, WCAG aims to make websites easily understandable, accessible, and usable. WCAG is a definitive guideline that businesses should follow during website development, which is achieved by leveraging accessibility testing. This helps in many ways, such as:

  • To make the website easily accessible for users with challenges or disabilities.
  • To attract more users and increase the company's market share.
  • To be accessible for users with low bandwidth.
  • To make the website and its information available to users across regions.

What needs to be tested in Accessibility?

      In Accessibility Testing, we have to focus mainly on the following areas:

  • Text contrast, i.e. the contrast ratio between text or images and the background color.
  • Hit area size.
  • View hierarchy of the UI, which determines how easy the Android app is to navigate.
  • Dynamic font size.
  • HTML validation
  • Headings in the application
  • Alternate text in the images
  • Captions and transcripts for audio and video content
  • Skip navigation option for people with mobility impairment.
  • Link text
  • Form labels should be accessible with valid tooltip
  • Keyboard operations for dynamic elements such as drop down.
  • PDF files on the web page need to be verified.
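The first item above, text contrast, can even be computed yourself. The sketch below (the class and method names are my own) implements the WCAG 2.x relative luminance and contrast ratio formulas for sRGB colors:

```java
public class ContrastRatio {

    // WCAG 2.x relative luminance of an sRGB color (components in 0-255)
    static double relativeLuminance(int r, int g, int b) {
        double[] c = { r / 255.0, g / 255.0, b / 255.0 };
        for (int i = 0; i < c.length; i++) {
            // linearize each channel (inverse sRGB gamma)
            c[i] = c[i] <= 0.03928 ? c[i] / 12.92 : Math.pow((c[i] + 0.055) / 1.055, 2.4);
        }
        return 0.2126 * c[0] + 0.7152 * c[1] + 0.0722 * c[2];
    }

    // contrast ratio between two luminances, always between 1 and 21
    static double contrastRatio(double l1, double l2) {
        double lighter = Math.max(l1, l2);
        double darker = Math.min(l1, l2);
        return (lighter + 0.05) / (darker + 0.05);
    }

    public static void main(String[] args) {
        // black text on a white background: the maximum possible ratio
        double ratio = contrastRatio(relativeLuminance(0, 0, 0), relativeLuminance(255, 255, 255));
        System.out.println("contrast ratio: " + Math.round(ratio) + ":1");
        // WCAG 2.1 level AA requires at least 4.5:1 for normal-size text
        System.out.println("passes AA for normal text: " + (ratio >= 4.5));
    }
}
```

Axe reports the same ratio in its color-contrast violations, but it is useful to know where the numbers come from.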

How can we achieve Accessibility Testing?

      We can achieve Accessibility Testing in the following ways, using the accessibility testing engine Axe:

  • Axe browser extension.
  • Axe command line.
  • Axe core library.
Axe Browser Extension

      This approach is pretty straightforward for performing Accessibility Testing. Using the Axe browser extension we can directly analyze a web page and see its violation details. The Axe browser extension is available for the Google Chrome, Firefox and Edge browsers. Following are the direct links to get the browser extension based on your browser type:

      Here I would like to share the Accessibility Testing approach using Firefox browser. When you load the above Axe browser extension in Firefox, you will get a page like below,

      Add the extension and restart the Firefox browser. Next, load the website on which you need to perform Accessibility Testing and open the developer tools in the browser (press F12). Go to the Axe tab and you will see the screen below:

      Click the ANALYZE button to begin the test; after a few seconds you will get the results, which include violation details, issue descriptions, issue information, and issue tags. You need to manually go through each violation and share the details with the development team to improve the accessibility of your website. Below are the results of the analysis:

Axe Command Line

      The Axe command-line approach is another way to test accessibility and identify violations. It requires NodeJS and the node package manager (npm). You must install NodeJS on your system before starting this way of accessibility testing. Once NodeJS is installed, you can install axe-cli with the following command:

npm install axe-cli -g

      This installs axe globally, so it can be accessed from any folder in the file system. Once installed, to test your web page, use the following command:

axe web-page-url --timeout=120 --save results.json --tags wcag2a

      The above command tests the page at the specified URL and saves the results in a file called results.json. The value of the timeout can be changed. The tags option specifies the rules to be run; there are several tags supported out of the box by the axe framework. In the above example we are running the WCAG 2.0 rules at level A. We can use the same website that we used with the Axe browser extension and see the output.

axe --timeout=120 --save results.json --tags wcag2a

      A detailed report is saved in the file results.json (available in your system user folder). Open this file in a JSON editor and drill down to the violations attribute. You will see the details of the accessibility violations along with suggestions for fixes.

Axe Core Library

      This is one of the more advanced approaches to performing accessibility testing and identifying violations. With the Axe core library, we can automate accessibility testing as part of your functional regression automation. In this approach we use the selenium dependency from the com.deque.html.axe-core group and the analyze method of the AxeBuilder class. Once you create an Accessibility Testing suite, you can configure it in CI/CD pipelines for continuous testing. We will discuss more about this automation approach to Accessibility Testing in the second part.

      I hope you enjoyed reading this article and got the concept of Accessibility Testing. Try to utilize the browser extension and command-line methods in your testing world for quick Accessibility Testing and analysis.

make it perfect!

Automate iPadOS Split View Multitasking With Appium

      iPad Pros run a slightly different version of iOS called iPadOS, and this version of iOS comes with several really useful features. One of my favorites is the ability to run two apps side by side. Apple calls this Split View Multitasking, and getting it going involves a fair bit of gestural control from the user. Here’s how a user would turn on Split View:

  1. Open the first app they want to work with
  2. Show the multitasking dock (using a slow short swipe up from the bottom edge)
  3. Touch, hold, and drag the icon of the second app they want to work with to the right edge of the screen

      From this point on, the two apps will be conjoined in Split View until the user drags the app separator all the way to the edge of the screen, turning Split View off. Of course, both apps must be designed to support split view for this to work. 

      Let’s now discuss how we can walk through this same series of steps with Appium to get ourselves into Split View mode, and further be able to automate whichever app of the two we desire. Unfortunately, there’s no single command to make this happen, and we have to use a lot of tricky techniques to mirror the appropriate user behavior. Basically, we need to worry about these things:

  1. Ensuring both apps have been opened recently enough to show up in the dock
  2. Executing the correct gestures to show the dock and drag the app icon to trigger Split View
  3. Telling Appium which of the apps in the Split View we want to work with at any given moment

We’re going to describe how to achieve the above steps.

Ensuring applications are in the dock

      For our strategy to work, we need the icon of the app we want to open in Split View to be in the dock. The best way to make this happen is to ensure that it has been launched recently; in fact, most recently apart from the currently-running app. Let’s take a look at the setup for an example where we’ll load up both Reminders and Photos in Split View. In our case, we’ll want Reminders on the left and Photos on the right. Because we’re going to open up Photos on the right, we’ll actually launch it first in our test, so that we can close it down, open up Reminders, and then open up Photos as the second app.

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("platformName", "iOS");
capabilities.setCapability("platformVersion", "13.3");
capabilities.setCapability("deviceName", "iPad Pro (12.9-inch) (3rd generation)");
capabilities.setCapability("app", PHOTOS);
capabilities.setCapability("simulatorTracePointer", true);
driver = new IOSDriver(new URL("http://localhost:4723/wd/hub"), capabilities);
wait = new WebDriverWait(driver, 10);
size = driver.manage().window().getSize();

      In this setUp method, we also construct a WebDriverWait, and store the screen dimensions on a member field, because we’ll end up using them frequently. When we begin our test, the Photos app will be open. What we want to do next is actually terminate Photos, and launch Reminders. At this point, we’ve launched both the apps we want to work with, so they are both the most recently-launched apps, and will both show up in the recent apps section of the dock. Then, we go back to the Home Screen, so that the dock is visible:

// terminate Photos and launch Reminders to make sure they're both the most recently launched apps
driver.executeScript("mobile: terminateApp", ImmutableMap.of("bundleId", PHOTOS));
driver.executeScript("mobile: launchApp", ImmutableMap.of("bundleId", REMINDERS));

// go to the home screen so we have access to the dock icons
ImmutableMap pressHome = ImmutableMap.of("name", "home");
driver.executeScript("mobile: pressButton", pressHome);

      In the next step of this flow, we figure out where the Photos icon is, and save that information for later. Then we re-launch Reminders, so that it is active and ready to share the screen with Photos.

// save the location of the icons in the dock so we know where they are when we
// need to drag them later, but no longer have access to them as elements
Rectangle photosIconRect = getDockIconRect("Photos");

// relaunch Reminders
driver.executeScript("mobile: launchApp", ImmutableMap.of("bundleId", REMINDERS));

      There is an interesting helper method here. getDockIconRect just takes an app name, and returns the position of its dock icon in the screen:

protected Rectangle getDockIconRect(String appName) {
    By iconLocator = By.xpath("//*[@name='Multitasking Dock']//*[@name='" + appName + "']");
    WebElement icon = wait.until(ExpectedConditions.presenceOfElementLocated(iconLocator));
    return icon.getRect();
}

      Here we use an xpath query to ensure that the element we retrieve is actually the dock icon and not the home screen icon. Then, we return the screen rectangle representing that element, so that we can use it later.

Showing the dock and entering into Split View

      At this point we are ready to call a special helper method designed to slowly drag the dock up in preparation for running the Split View gesture:

// pull the dock up so we can see the recent icons, and give it time to settle
showDock();

protected void showDock() {
    // swipe is a helper (not shown) that performs a slow swipe between screen-percentage coordinates
    swipe(0.5, 1.0, 0.5, 0.92, Duration.ofMillis(1000));
}

      The showDock method performs a slow swipe from the bottom middle of the screen, up just far enough to reveal the dock. Now that the dock is shown, we can actually enter Split View. To do that, we make use of a special iOS-specific method, mobile: dragFromToForDuration, which enables us to perform a touch-and-hold on the location of the Photos dock icon and then drag it to the right side of the screen. We wrap this up in a helper method called dragElement. Below is the implementation:

// now we can drag the Photos app icon over to the right edge to enter Split View, giving it a bit of time to settle
dragElement(photosIconRect, 1.0, 0.5, Duration.ofMillis(1500));

protected void dragElement(Rectangle elRect, double endXPct, double endYPct, Duration duration) {
    Point start = new Point(elRect.x + elRect.width / 2, elRect.y + elRect.height / 2);
    Point end = new Point((int) (size.width * endXPct), (int) (size.height * endYPct));
    driver.executeScript("mobile: dragFromToForDuration", ImmutableMap.of(
            "fromX", start.x, "fromY", start.y,
            "toX", end.x, "toY", end.y,
            "duration", duration.toMillis() / 1000.0));
}

      Essentially, we take the rect of a dock icon and pass in the ending x and y coordinate percentages, along with the duration of the “hold” portion of the gesture. The dragElement helper converts these to the appropriate screen coordinates and calls the mobile: method.
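To make the coordinate conversion concrete, this standalone sketch (the class name, method name, and example rectangle are hypothetical) reproduces the same math dragElement performs before calling the mobile: method:

```java
public class DragMath {

    // start point = center of the icon rect; end point = screen size scaled by percentages
    static int[] dragPoints(int rectX, int rectY, int rectW, int rectH,
                            int screenW, int screenH, double endXPct, double endYPct) {
        int startX = rectX + rectW / 2;
        int startY = rectY + rectH / 2;
        int endX = (int) (screenW * endXPct);
        int endY = (int) (screenH * endYPct);
        return new int[] { startX, startY, endX, endY };
    }

    public static void main(String[] args) {
        // hypothetical 60x60 dock icon on a 1024x1366-point iPad Pro screen,
        // dragged to the right edge (endXPct = 1.0) at mid-height (endYPct = 0.5)
        int[] p = dragPoints(560, 1300, 60, 60, 1024, 1366, 1.0, 0.5);
        System.out.println(p[0] + "," + p[1] + " -> " + p[2] + "," + p[3]);
        // prints 590,1330 -> 1024,683
    }
}
```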

Working with simultaneously open applications

      At this stage in our flow, we’ve got both apps open in Split View! But if we take a look at the page source, we’ll find that we only see the elements for one of the apps. And in fact, we can only work with one app’s elements at a time. We can, however, tell Appium which app we want to work with, by updating the defaultActiveApplication setting to the bundle ID of whichever app you want to work with:

driver.setSetting("defaultActiveApplication", PHOTOS);
wait.until(ExpectedConditions.presenceOfElementLocated(MobileBy.AccessibilityId("All Photos")));
driver.setSetting("defaultActiveApplication", REMINDERS);
wait.until(ExpectedConditions.presenceOfElementLocated(MobileBy.AccessibilityId("New Reminder")));

      In the code above, you can see how we call driver.setSetting, with the appropriate setting name and bundle ID. After doing this for a given app, we can find elements within that app, and of course we can switch to any other app if we want as well.

      So that’s how we can enter Split View and automate each application on the screen. Try to utilize the above capabilities in your iPadOS automation.

Reference: Appium Pro

make it perfect!

Execute Your Arbitrary ADB Commands with Appium

      If you’re not a big Android person, you might not know about ADB, the “Android Debug Bridge”. ADB is a powerful tool provided as part of the Android SDK by Google, that allows running all sorts of interesting commands on a connected emulator or device. One of these commands is adb shell, which gives you shell access to the device filesystem (including root access on emulators or rooted devices). adb shell is the perfect tool for solving many problems.

      Historically, Appium did not allow running arbitrary ADB commands. This is because Appium was designed to run in a remote environment, possibly sharing an OS with other services or Appium servers, and potentially with many connected Android devices. It would be a huge security hole to give any Appium client the full power of ADB in this context. Recently, the Appium team decided to unlock this functionality behind a special server flag, so that someone running an Appium server could intentionally open up this security hole. This is achieved using the --relaxed-security flag. So you can now start up Appium like this to run arbitrary ADB commands:

appium --relaxed-security

      With Appium running in this mode, you have access to a new “mobile:” command called “mobile: shell“. The Appium “mobile:” commands are special commands that can be accessed using executeScript (at least until client libraries make a nicer interface for taking advantage of them). Here’s how a call to “mobile: shell” looks in Java:

driver.executeScript("mobile: shell", arg);

arg needs to be a JSONifiable object with two keys:

  • command: a String, the command to be run under adb shell.
  • args: an array of Strings, the arguments passed to the shell command.

      For example, let’s say we want to clear out the pictures on the SD card, and that on our device, these are located at /mnt/sdcard/Pictures. If we were running ADB on our own without Appium, we’d accomplish our goal by running:

adb shell rm -rf /mnt/sdcard/Pictures/*.*

      To translate this to Appium’s “mobile: shell” command, we simply strip off adb shell from the beginning, and we are left with rm -rf /mnt/sdcard/Pictures/*.*

      The first word here is the “command”, and the rest constitute the “args”. So we can construct our object as follows:

List<String> removePicsArgs = Arrays.asList("-rf", "/mnt/sdcard/Pictures/*.*");
Map<String, Object> removePicsCmd = ImmutableMap.of("command", "rm", "args", removePicsArgs);
driver.executeScript("mobile: shell", removePicsCmd);
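If you translate many ADB commands this way, the token splitting can be wrapped in a small helper. This is just a sketch (the class and method names are my own, and the naive whitespace split does not handle quoted arguments):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ShellCommandBuilder {

    // first whitespace-separated token becomes "command", the rest become "args"
    static Map<String, Object> toMobileShellArg(String commandLine) {
        List<String> tokens = Arrays.asList(commandLine.trim().split("\\s+"));
        Map<String, Object> arg = new LinkedHashMap<>();
        arg.put("command", tokens.get(0));
        arg.put("args", tokens.subList(1, tokens.size()));
        return arg;
    }

    public static void main(String[] args) {
        Map<String, Object> arg = toMobileShellArg("rm -rf /mnt/sdcard/Pictures/*.*");
        System.out.println(arg.get("command"));
        System.out.println(arg.get("args"));
    }
}
```

The resulting map can then be passed straight to driver.executeScript with the "mobile: shell" command.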

      We can also retrieve the result of the ADB call, for example if we wish to verify that the directory is now indeed empty:

List<String> lsArgs = Arrays.asList("/mnt/sdcard/Pictures/.");
Map<String, Object> lsCmd = ImmutableMap.of("command", "ls", "args", lsArgs);
String lsOutput = (String) driver.executeScript("mobile: shell", lsCmd);
Assert.assertEquals("", lsOutput);

      The output of our command is returned to us as a String which we can do whatever we want with, including making assertions on it.

      In this article, I hope you saw the power of ADB through a few simple file system commands. You can actually do many more useful things with ADB than deleting files, so go out there and have fun with it in your automation flow.

Reference: Appium Pro

make it perfect!

Android 11 Highlights

       We know that Android 11 was released on September 8th, 2020. Android 11 is optimized for how you use your phone, giving you powerful device controls and easier ways to manage conversations, privacy settings and so much more. Let’s see the Android 11 highlights:

  • Manage your conversations: Get all your messages in one place. See, respond to and control your conversations across multiple messaging apps, all in the same spot. Then select the people you always chat with. These priority conversations show up on your lock screen, so you never miss anything important. With Android 11, you can pin conversations so they always appear on top of other apps and screens. Bubbles keep the conversation going while you stay focused on whatever else you’re doing. Nearby Share helps you quickly and securely send files, videos, map locations and more to devices nearby. It works with Android devices, Chromebooks or devices running the Chrome browser.
  • Capture and share content: Screen recording lets you capture what’s happening on your phone. And it’s built right into Android 11, so you don’t need an extra app. Record with sound from your mic, your device or both. Select text from your apps. Grab images too. On Pixel devices, you can easily copy, save and share info between many apps. Like your browser, your delivery app or from the news.
  • Helpful tools that predict what you want: Smart Reply gives you suggested responses in conversations. The App Suggestions feature makes it easy to get to the apps you need most. Smart Folders provide smarter ways to organize your apps.
  • Control your phone with your voice: With Android 11, Voice Access is faster and easier to use. Intuitive labels on apps help you control and navigate your phone, all by speaking out loud. Even use Voice Access offline, for more support whenever you need it.
  • Accessibility: Lookout now has two new modes. Scan Document and Food Label help people with low vision or blindness get things done faster and more easily. Opening Lookout also turns on your flashlight, helping users read in low light. And Lookout is now available on all 2GB+ devices running Android 6.0 or later.
  • 5G detection API and Ethernet tethering: With new APIs, apps know if you’re on a 5G connection. So you get better performance. Share a tethered internet connection with a USB ethernet dongle.
  • Digital Well-being: Bedtime Mode quiets your phone when it’s time to go to sleep. Schedule it to run automatically or while your phone charges as you rest. Your screen switches to grayscale and your notifications go silent with Do Not Disturb. The new bedtime feature in Clock helps you set a healthy sleep schedule. Track screen time at night and fall asleep to calming sounds. Then wake up to your favorite song. Or use the Sunrise Alarm that slowly brightens your screen to start the day.
  • Enterprise: Get full privacy from IT on your work profile on company-owned devices. Plus new asset management features for IT to ensure security without visibility into personal usage. Connect work and personal apps to get a combined view of your information in places like your calendar or your reminders. Easily disconnect from work. With Android 11, you can now set a schedule to automatically turn your work profile on and off. Use the work tab in more places to share and take actions across work and personal profiles. See work tabs when sharing, opening apps and in settings. Get a new notification if your IT admin has turned on location services on your managed device.
  • Device Controls: Control your connected devices from one place. Set the temperature to chill, then dim your lights, all from a single spot on your phone. Just long press the power button to see and manage your connected devices, making life at home that much easier.
  • Media Controls: Switch from your headphones to your speaker without missing a beat. Tap to hear your tunes or watch video on your TV. With Android 11, you can quickly change the device that your media plays on.
  • Connect Android to your car. Skip the cable: Hit the road without plugging in. Android Auto now works wirelessly with devices running Android 11—so you can bring the best of your phone on every drive.
  • Privacy and Security: You control what apps can access. Take charge of your data with Android. You choose whether to give apps you download permission to access sensitive data. Or not. So you stay better protected. Give one-time permissions to apps that need your mic, camera or location. The next time the app needs access, it must ask for permission again. If you haven’t used an app in a while, you may not want it to keep accessing your data. So Android will reset permissions for your unused apps. You can always turn permissions back on. With Android 11, you get even more security and privacy fixes sent to your phone from Google Play. The same way all your other apps update. So you get peace of mind. And your device stays armed with the most recent defense.

Thanks for spending your time here to read this article and learn about the new features of Android 11.


make it perfect!