Wednesday, July 30, 2025

๐Ÿ” How I Used OOPS Concepts in My Selenium Automation Framework (with Real-World Examples)


In today’s test automation world, building scalable, maintainable, and readable frameworks is non-negotiable. One of the key enablers of such robust automation design is the effective use of Object-Oriented Programming (OOPS) principles.

In this post, I’ll walk you through how I have practically applied OOPS concepts like Encapsulation, Inheritance, Abstraction, and Polymorphism in building a modern Selenium automation framework using Java and Page Object Model (POM)—with real-world use cases from a payments application.


🧱 1. Encapsulation – Grouping Page Behaviors & Data

In POM, each web page is represented by a Java class. All locators and associated actions (methods) are bundled into the same class, providing encapsulation.

Example:

LoginPage.java might contain:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class LoginPage {

    @FindBy(id = "username")
    private WebElement usernameInput;

    @FindBy(id = "password")
    private WebElement passwordInput;

    @FindBy(id = "loginBtn")
    private WebElement loginButton;

    // PageFactory populates the @FindBy fields; without this they stay null
    public LoginPage(WebDriver driver) {
        PageFactory.initElements(driver, this);
    }

    public void login(String user, String pass) {
        usernameInput.sendKeys(user);
        passwordInput.sendKeys(pass);
        loginButton.click();
    }
}

This hides internal mechanics from external classes, exposing only the method login()—a clean interface for test classes.


🧬 2. Inheritance – Reusability of Test Utilities

Inheritance is used to extend common functionality across test components like base test setup, common utilities, or driver management.

Example:

import java.time.Duration;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public class BaseTest {

    protected WebDriver driver;

    @BeforeMethod
    public void setup() {
        driver = new ChromeDriver();
        driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(10));
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}

Then, individual test classes inherit this:

public class LoginTests extends BaseTest {

    @Test
    public void testValidLogin() {
        new LoginPage(driver).login("user", "pass");
        // assertions
    }
}

🎭 3. Polymorphism – Interface-Based Design

Polymorphism allows flexible and scalable design, especially when using interface-driven development.

Use Case: Suppose your framework needs to support both Chrome and Firefox.

public interface DriverManager {
    WebDriver getDriver();
}

Concrete implementations:

public class ChromeManager implements DriverManager {
    public WebDriver getDriver() {
        return new ChromeDriver();
    }
}

public class FirefoxManager implements DriverManager {
    public WebDriver getDriver() {
        return new FirefoxDriver();
    }
}

Now, switching browsers is easy without changing test logic:

DriverManager manager = new ChromeManager(); // or new FirefoxManager()
WebDriver driver = manager.getDriver();


🧩 4. Abstraction – Hiding Implementation Behind Layers

Abstraction is used in frameworks via utility and wrapper classes to hide the complexity of Selenium commands.

Example: Create a utility method for dropdown handling:

public class DropdownUtils {
    public static void selectByVisibleText(WebElement dropdown, String text) {
        new Select(dropdown).selectByVisibleText(text);
    }
}

Now testers use just:

DropdownUtils.selectByVisibleText(dropdownElement, "United States");

This hides internal logic and improves readability.


๐Ÿ Final Thoughts

OOPS principles are not just theoretical—they are the foundation of real-world, enterprise-grade test automation frameworks. By applying:

  • Encapsulation (clean page classes),

  • Inheritance (shared test logic),

  • Polymorphism (browser/interface abstractions), and

  • Abstraction (utility layers),

you build a test architecture that’s scalable, readable, and easily maintainable.

This approach isn’t limited to Selenium. You can apply the same mindset in API testing frameworks, Appium, Playwright, and beyond.

🚀 Turbo Intruder: Unleashing High-Speed Race Condition Testing with Burp Suite


When it comes to identifying race conditions and testing concurrency issues in APIs, Turbo Intruder is a must-have weapon in your offensive security toolkit. This powerful Burp Suite extension is built to launch blazing-fast HTTP requests—ideal for race condition exploits that require precise timing and volume.

Here’s a quick guide to getting started:


🛠️ Setting Up Turbo Intruder in Burp Suite

Step 1: Launch Burp Suite and go to the top menu → Extensions.

Step 2: In the Extensions tab, click BApp Store.

Step 3: Search for Turbo Intruder, then click Install.


⚙️ Running Your First Attack

Once installed, it’s time to get hands-on:

  • Select any API request in Burp’s HTTP history or Repeater.

  • Right-click the request → Navigate to Extensions > Turbo Intruder > Send to Turbo Intruder.

🧩 Customize the Request

  • Insert a %s token into any part of the request (e.g., a header or query parameter) where you want to inject payloads.

  • Scroll down to the scripting panel and modify the Python script to control how the payloads are fired—sequentially, concurrently, or in bursts.
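
For reference, here is a minimal gate-based script for race condition testing, adapted from the extension's bundled race.py example (the request count of 30 is arbitrary; tune it to your target):

def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=30,
                           requestsPerConnection=100,
                           pipeline=False)
    # The 'gate' argument holds back the final byte of each request
    # until openGate() is called, so all requests land almost simultaneously
    for i in range(30):
        engine.queue(target.req, gate='race1')
    engine.openGate('race1')
    engine.complete(timeout=60)

def handleResponse(req, interesting):
    table.add(req)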

💥 Launch the Attack

  • Click Attack to fire off the customized payloads at high speed.

  • Analyze the results to detect anomalies that signal race conditions or concurrency flaws.

๐Ÿ” Why Turbo Intruder?

  • Speed: It outpaces traditional Burp tools with asynchronous, multi-threaded requests.

  • Control: Fine-grained scripting lets you simulate real-world race conditions.

  • Visibility: Detailed results make it easier to identify timing-related bugs.

🧠 Pro Tip

Race conditions often result in subtle, non-deterministic behavior. Run attacks multiple times and compare response patterns. Look out for HTTP 409, duplicated resources, or unauthorized access anomalies.


Try it out and take your API security testing to the next level!

Let me know your experience with Turbo Intruder or drop your favorite race condition use case in the comments ๐Ÿ‘‡


Tuesday, July 29, 2025

🛡️ How to Test Security and Fraud Scenarios for Digital Payments


In a world where digital payments power everything from daily groceries to global remittances, ensuring security and fraud prevention is no longer a luxury—it’s mission-critical.

Whether you’re working on PayPal, Stripe, Razorpay, or GrabPay, your role as a QA or automation engineer is to make sure payments are secure, resilient, and abuse-proof.

In this post, we’ll walk through the strategies, techniques, and tools to test security and fraud detection in modern payment systems.


๐Ÿ” Security Testing in Digital Payments

1. Authentication & Authorization

Ensure that only the right users can initiate and complete payments.

  • Verify OTP, PIN, password, and biometric flows.

  • Test token/session expiry and renewal.

  • Simulate brute-force attacks to ensure rate-limiting and lockout work.

  • Ensure role-based permissions are enforced across all endpoints.

🧠 Tip: Use tools like Postman (with JWT plugin) or Burp Suite to test token manipulation and expiry.
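
As a rough illustration, a brute-force lockout check can be scripted in a few lines of Python. The endpoint, payload shape, and thresholds below are assumptions; adapt them to your API:

import requests

URL = "https://payments.example.com/api/v1/verify-otp"  # hypothetical endpoint

def test_otp_lockout(session_token):
    for attempt in range(20):
        resp = requests.post(
            URL,
            json={"otp": f"{attempt:06d}"},  # deliberately wrong OTPs
            headers={"Authorization": f"Bearer {session_token}"},
            timeout=10,
        )
        # Expect 429 (rate limited) or 423 (locked) well before 20 failures
        if resp.status_code in (429, 423):
            print(f"Lockout enforced after {attempt + 1} attempts")
            return
    raise AssertionError("No rate limiting or lockout after 20 bad OTPs")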


2. Secure Transmission (SSL/TLS)

Payment data must be encrypted in transit.

  • Confirm HTTPS is enforced across all endpoints.

  • Reject weak cipher suites and expired/invalid SSL certificates.

  • Validate mobile apps implement certificate pinning.

Try capturing requests using Wireshark or Burp Proxy to verify encrypted transport.


3. Input Validation & Injection

Test every field involved in the transaction process.

  • Simulate SQL injection attacks in billing/payment address fields.

  • Test for XSS in saved cards or transaction summaries.

  • Use fuzzing tools to detect unhandled payloads in backend APIs.
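
A minimal fuzzing sketch along these lines (the endpoint and field name are hypothetical; a dedicated fuzzer will go much deeper):

import requests

URL = "https://payments.example.com/api/v1/billing-address"  # hypothetical

PAYLOADS = [
    "' OR '1'='1",                # classic SQL injection probe
    "1; DROP TABLE users--",      # stacked-query probe
    "<script>alert(1)</script>",  # reflected/stored XSS probe
    "../../etc/passwd",           # path traversal probe
]

def fuzz_billing_address(token):
    for payload in PAYLOADS:
        resp = requests.post(
            URL,
            json={"address_line_1": payload},
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        # A 5xx hints at unhandled input; an echoed raw payload hints at XSS
        if resp.status_code >= 500 or payload in resp.text:
            print(f"Possible issue with payload {payload!r} -> {resp.status_code}")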


4. Tokenization & PCI Compliance

Ensure sensitive card or bank data is never stored or displayed.

  • Check logs for PANs or CVVs—none should exist.

  • Confirm that tokens are used in place of actual card data.

  • Ensure your system complies with PCI DSS standards.


⚠️ Fraud Testing Scenarios

1. Business Logic Abuse

Fraud doesn’t always come from security holes—sometimes it’s clever users exploiting loopholes.

  • Test coupon abuse by simulating multiple users on a single device.

  • Attempt to game referral systems using parallel devices or emulators.

  • Attempt multiple cashback-eligible transactions within abnormally short time windows.


2. Anomaly Detection & Geo Abuse

Your backend should detect and respond to abnormal behavior.

  • Simulate transactions from:

    • Blocked countries

    • VPN/Tor networks

    • Inconsistent IP-device combinations

  • Trigger alerts for large amounts with unusual metadata.


3. Replay & Duplicate Payment Attacks

Ensure the system can handle re-sent or duplicate payment requests.

  • Test by refreshing the payment page and resubmitting the request.

  • Reuse expired OTPs and check if validation is still enforced.

  • Test if the system honors idempotency keys to avoid double charges.
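
Here is a hedged sketch of an idempotency check using Python's requests library; the endpoint, fields, and header name are assumptions modeled on common gateway APIs:

import uuid
import requests

URL = "https://payments.example.com/api/v1/charges"  # hypothetical endpoint

def test_duplicate_payment_is_deduplicated(token):
    key = str(uuid.uuid4())
    headers = {"Authorization": f"Bearer {token}", "Idempotency-Key": key}
    body = {"amount": 1999, "currency": "USD", "source": "tok_test_visa"}

    first = requests.post(URL, json=body, headers=headers, timeout=10)
    second = requests.post(URL, json=body, headers=headers, timeout=10)

    # A well-behaved gateway returns the original charge, not a new one
    assert first.status_code in (200, 201)
    assert second.json()["id"] == first.json()["id"], "Duplicate charge created!"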


🧪 Penetration & Automation Tools

Here are some of the tools commonly used in payment system security testing:

  • 🛠️ OWASP ZAP: Passive scanning and basic fuzzing

  • 🧪 Burp Suite: Full web/API pentesting and token tampering

  • 🔄 JMeter: Load testing and flood simulation

  • 🕵️ WireMock: Simulate gateway failures and response delays

  • ⚙️ Postman + Scripting: Token lifecycle and header replay testing


✅ Sample Test Scenarios

Here are critical test scenarios every QA or SDET should include when testing digital payment security and fraud systems:

  • ๐Ÿ” Carding Attack Simulation

    Try performing hundreds of small transactions rapidly using random or stolen card numbers.

    ➤ Expected: System blocks the IP, triggers rate limiting, or requires CAPTCHA.

  • 🔢 Brute Force OTP Attempts

    Continuously try random OTPs during checkout or login.

    ➤ Expected: User account gets locked or temporary timeout is enforced after a threshold.

  • 🧾 Duplicate Payment Requests

    Submit the same payment request multiple times by refreshing or resending it.

    ➤ Expected: Only one transaction should succeed (idempotency should be enforced).

  • ๐ŸŒ Payments from Blocked Locations

    Try initiating payments using VPNs or from geographies that are disallowed.

    ➤ Expected: Gateway or system blocks the request based on geo or IP reputation.

  • 🔄 Session Replay by Refreshing Payment Page

    Refresh the payment page or use the back button and attempt a resubmission.

    ➤ Expected: Token/session should be invalidated, and user should be redirected to start fresh.

  • ๐Ÿท️ Coupon or Promo Abuse

    Apply the same promo or referral code across multiple user accounts/devices.

    ➤ Expected: Backend should detect abuse and flag or restrict suspicious users.

  • ๐Ÿ” Expired OTP or Token Reuse

    Reuse expired authorization codes or payment tokens.

    ➤ Expected: Server rejects the request with an appropriate error message.



📊 Monitoring & Alerting (Post-release)

Don’t stop after testing. Monitor live systems for:

  • Unusual transaction spikes by IP or card BIN

  • High failure rates on certain gateways or banks

  • Repeated coupon or referral attempts

Use tools like the ELK Stack, Grafana, Prometheus, and Splunk for visibility.


🎯 Final Thoughts

Security and fraud testing is not just about using tools—it’s about thinking like an attacker while keeping the business context in mind.

You need to:

  • Blend white-box + black-box testing.

  • Automate what’s repetitive.

  • Update your threat models regularly.

  • Work closely with product, dev, and security teams.

As testers, we’re not just validating features — we’re guarding the gates of trust in the digital economy.

📊 Mobile App Testing Metrics: What Every Senior QA Engineer Should Track


In today’s mobile-first world, quality isn’t just about “does it work?” — it’s about performance, stability, and experience across thousands of devices and real-world conditions.

In this post, I’ll break down the essential mobile app testing metrics across both functional and non-functional categories, and share why each is critical for modern QA teams.


✅ Functional Testing Metrics (Ensuring the App Works as Expected)

Functional metrics validate how well your app delivers expected features. These metrics give you confidence that the app is ready for real users.

  • Test Case Coverage: Helps you measure how much of the app’s core workflows are validated through test cases (manual or automated).

  • Pass/Fail Rate: Tells you how stable the build is. A high failure rate signals instability or regression.

  • Defect Density: Tracks how many bugs are found per feature or module. It’s useful for identifying hotspots or weak areas in the app.

  • Bug Reopen Rate: Measures how often closed bugs reappear. A high reopen rate suggests incomplete fixes or misunderstood issues.

  • Automation Coverage: Indicates what percentage of tests are automated. It helps identify areas that can benefit from automation for faster regression cycles.

  • Crash Reproduction Rate: Reflects how reliably testers can reproduce reported crashes — critical for triaging user-submitted issues.

  • Exploratory Testing Insights: Captures notes and findings from unscripted testing, often revealing usability issues and edge cases.


🚀 Non-Functional Testing Metrics (Ensuring the App is Fast, Stable, and Safe)

Non-functional testing metrics focus on performance, stability, and overall experience — factors that directly influence user retention and app ratings.

  • App Start Time: Measures how long the app takes to open, especially after a cold launch. Anything over 2 seconds can degrade the user experience.

  • Memory Usage & Leaks: Helps detect memory spikes or leaks that could lead to slowdowns or crashes, especially on low-end devices.

  • Battery Consumption: Evaluates how the app affects device battery life — a key concern for mobile users.

  • Crash & ANR Rate: Tracks how often the app crashes or becomes unresponsive. Tools like Firebase Crashlytics or Sentry can monitor this in real time.

  • Network Performance: Focuses on how the app behaves under different network conditions (3G, 4G, offline, etc.). Includes API latency and error rates.

  • App Size & Load Time: Larger apps take longer to install and may deter users from downloading. It’s also a factor in emerging markets with limited storage or data.

  • Security Metrics: Includes how securely the app handles sensitive data (e.g., token storage, permission usage, SSL pinning).

  • Push Notification Delivery Rate: Measures the reliability of push notifications, especially when the app is in background or killed state.

  • Session Length & Retention Indicators: While often tracked by product teams, these are useful for QA when analyzing how app performance impacts user behavior.


🛠 Tools I Use to Track These Metrics

To track these metrics efficiently, I use a combination of industry-standard tools:

  • Test execution & automation: TestRail, Zephyr, Xray, Allure

  • Automation & CI/CD: Appium, Espresso, Detox, Jenkins, GitHub Actions

  • Crash reporting & performance monitoring: Firebase Crashlytics, Sentry, New Relic

  • Security scanning: OWASP Mobile Checklist, MobSF, Burp Suite

  • User analytics & behavior: Mixpanel, PostHog, Google Analytics for Firebase


📈 Metrics I Include in QA Dashboards or Release Reports

When summarizing test results for stakeholders or leadership, I often include:

  • The number of tests executed, passed, failed, or skipped

  • Automation health (execution duration, flaky test rate)

  • High-priority defect trends across sprints or builds

  • Distribution of failures by device, OS version, or app module

  • Crash-free session rates post-deployment

  • Memory, startup time, and battery benchmarks over releases
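
Most of these dashboard numbers reduce to simple ratios. A toy Python sketch with made-up counts (wire in data from your test-management and crash-reporting tools):

executed, passed, failed, skipped = 1240, 1176, 48, 16
flaky = 9                          # tests that failed, then passed on retry
sessions, crashed_sessions = 52_000, 390

pass_rate = passed / executed * 100
flaky_rate = flaky / executed * 100
crash_free_rate = (sessions - crashed_sessions) / sessions * 100

print(f"Pass rate:           {pass_rate:.1f}%")        # 94.8%
print(f"Flaky test rate:     {flaky_rate:.1f}%")       # 0.7%
print(f"Crash-free sessions: {crash_free_rate:.2f}%")  # 99.25%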


🎯 Final Thoughts

Mobile testing isn’t just about clicking buttons—it’s about measuring what matters. The right metrics help QA teams move from reactive testers to proactive quality advocates. Whether you’re testing a fintech app in Singapore or a delivery platform in Indonesia, these metrics help you build confidence, ship faster, and improve user satisfaction at scale.

📱 Mobile App Testing: 10 Critical Test Scenarios You Can’t Miss (That Go Beyond Web UI Testing)


When it comes to testing mobile applications, the challenges go far beyond what typical web UI testing entails. Mobile apps must work flawlessly across a fragmented ecosystem of devices, screen sizes, OS versions, sensors, network conditions—and still deliver a high-performance experience. That’s why test engineers must design test cases that account for mobile-specific conditions that web-based apps don’t encounter.

In this post, we’ll break down the 10 critical mobile app test cases that every QA engineer should prioritize—and explain how they differ from traditional web UI testing.


✅ 1. Installation & Launch

Unlike web apps, mobile apps must be installed, upgraded, and uninstalled through OS-specific stores like Google Play or Apple App Store.

Test Cases:

  • App installs/uninstalls cleanly on all supported devices.

  • Launches successfully after a clean install or version upgrade.

  • First-launch behavior (onboarding, permission prompts) works without failure.


๐ŸŒ 2. Device & OS Compatibility

Mobile ecosystems are highly fragmented. You must ensure compatibility across OS versions, hardware specs, and screen dimensions.

Test Cases:

  • Verify app functionality on Android 10–14 and iOS 14–17.

  • Check responsiveness across tablets, foldables, and small-screen phones.

  • Test on low-RAM or budget devices (to catch memory issues).


📶 3. Network Conditions

Mobile users are always switching between 5G, Wi-Fi, and even no network. Your app must handle this gracefully.

Test Cases:

  • App behaves predictably with no internet or low bandwidth.

  • Test auto-retries for failed API calls due to timeouts.

  • Switching from Wi-Fi to mobile data mid-session doesn’t break functionality.
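
One way to script the Wi-Fi-to-data handover on Android is with adb's standard svc commands; a hedged Python sketch (the post-switch assertion belongs in your UI test):

import subprocess
import time

def adb(*args):
    subprocess.run(["adb", "shell", *args], check=True)

def switch_wifi_to_mobile_data():
    adb("svc", "data", "enable")   # make sure mobile data is available
    adb("svc", "wifi", "disable")  # drop Wi-Fi mid-session
    time.sleep(5)                  # give the app time to reconnect
    # ...assert in your UI test that the session survived the handover...
    adb("svc", "wifi", "enable")   # restore Wi-Fi for the next test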


🔄 4. Background & Resume Behavior

A mobile app should maintain state and not crash when interrupted by a phone call or switching to another app.

Test Cases:

  • App resumes gracefully from background state.

  • Data entry is preserved when the user switches away and returns.

  • Proper behavior after a cold restart or after device reboot.


🔋 5. Battery & Performance

Performance testing on mobile goes beyond responsiveness—it’s also about battery and resource consumption.

Test Cases:

  • No excessive battery drain during idle or active use.

  • Monitor CPU/memory usage over time (watch for leaks).

  • Measure cold and warm start times.


๐Ÿ” 6. Permission Handling

Mobile apps rely on permissions to access hardware features. You must test both granting and denying permissions.

Test Cases:

  • App only requests necessary permissions.

  • Behavior is graceful when permissions are denied or revoked.

  • Scoped storage compliance (Android 11+) is in place.
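
Runtime permissions can be revoked mid-test with adb's pm revoke; a small Python sketch, with a hypothetical package name:

import subprocess

PACKAGE = "com.example.payapp"  # hypothetical app id

def revoke_and_relaunch(permission="android.permission.CAMERA"):
    subprocess.run(["adb", "shell", "pm", "revoke", PACKAGE, permission], check=True)
    # Relaunch via monkey so we don't need to know the activity name
    subprocess.run(
        ["adb", "shell", "monkey", "-p", PACKAGE,
         "-c", "android.intent.category.LAUNCHER", "1"],
        check=True,
    )
    # ...assert the app shows its permission-denied fallback instead of crashing...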


🔔 7. Push Notifications

Push notifications are a core engagement channel and must work across all app states.

Test Cases:

  • Push received when app is in background or killed.

  • Tapping the notification leads to correct app screen.

  • Notifications respect user opt-in/opt-out settings.


📲 8. Gestures & UI Flexibility

Mobile users interact via gestures and virtual keyboards, making UX more dynamic than web.

Test Cases:

  • UI responds correctly to swipes, taps, long presses, and pinch-to-zoom.

  • Keyboard overlays don’t hide important input fields.

  • Smooth adaptation to dark mode, orientation changes (portrait ↔ landscape).


๐Ÿ” 9. Security Testing

Security is non-negotiable, especially with personal data or financial transactions involved.

Test Cases:

  • Secure storage for sensitive data (e.g., keystore/token vault).

  • No sensitive logs left in logcat or crash logs.

  • Behavior on rooted/jailbroken devices is safely restricted.


📊 10. Analytics & Store Compliance

Apps often embed SDKs for analytics and crash reporting, and must comply with store policies.

Test Cases:

  • Verify Firebase, GA, or Crashlytics events are firing correctly.

  • App follows Play Store / App Store policy (e.g., no deprecated APIs).

  • Correct versioning and metadata shown in store listing.


🧪 Final Thoughts

If you’re only testing your mobile app like a web app, you’re missing half the picture. Mobile brings unique challenges and requires a deeper, device-aware test strategy. The 10 critical mobile test areas above should form the core of your test planning, especially for high-scale production apps used across a variety of devices and conditions.

Monday, July 28, 2025

🚀 Introducing the Universal API Testing Tool — Built to Catch What Manual Testing Misses


In today’s software-driven world, APIs are everywhere — powering everything from mobile apps to microservices. But with complexity comes risk. A single missed edge case in an API can crash systems, leak data, or block users. That’s a huge problem.

After years of working on high-scale automation and quality engineering projects, I decided to build something that tackles this challenge head-on:

👉 A Universal API Testing Tool powered by automation, combinatorial logic, and schema intelligence.

This tool is designed not just for test engineers — but for anyone who wants to bulletproof their APIs and catch critical bugs before they reach production.


๐Ÿ” The Problem with Manual API Testing

Let’s face it: manual API testing, or even scripted testing with fixed payloads, leaves massive blind spots. Here’s what I’ve consistently seen across projects:

  • ๐Ÿ” Happy path bias: Most tests cover only expected (ideal) scenarios.

  • ❌ Boundary and edge cases are rarely tested thoroughly.

  • ๐Ÿงฑ Schema mismatches account for over 60% of integration failures.

  • ๐Ÿ”„ Complex, nested JSON responses break traditional test logic.

Even with the best intentions, manual testing only touches ~15% of real-world possibilities. The rest? They’re left to chance — and chance has a high failure rate in production.


💡 Enter: The Universal API Testing Tool

This tool was created to turn a single API request + sample response into a powerful battery of intelligent, automated test cases. And it does this without relying on manually authored test scripts.

Let’s break down its four core pillars:


๐Ÿ” 1. Auto-Schema Derivation

Goal: Ensure every response conforms to an expected structure — even when you didn’t write the schema.

  • Parses sample responses and infers schema rules dynamically

  • Detects type mismatches, missing fields, and violations of constraints

  • Supports deeply nested objects, arrays, and edge data structures

  • Validates responses against actual usage, not just formal docs

🔧 Think of it like “JSON Schema meets runtime intelligence.”
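
To make the idea concrete, here is a toy version of schema inference (illustrative only, not the tool's actual code): derive a type skeleton from one sample response, then validate another response against it:

def infer(value):
    if isinstance(value, dict):
        return {k: infer(v) for k, v in value.items()}
    if isinstance(value, list):
        return [infer(value[0])] if value else []
    return type(value).__name__          # e.g. 'str', 'int', 'bool'

def validate(value, schema, path="$"):
    errors = []
    if isinstance(schema, dict):
        if not isinstance(value, dict):
            return [f"{path}: expected object, got {type(value).__name__}"]
        for k, sub in schema.items():
            if k not in value:
                errors.append(f"{path}.{k}: missing field")
            else:
                errors += validate(value[k], sub, f"{path}.{k}")
    elif isinstance(schema, list):
        for i, item in enumerate(value if isinstance(value, list) else []):
            errors += validate(item, schema[0], f"{path}[{i}]")
    elif type(value).__name__ != schema:
        errors.append(f"{path}: expected {schema}, got {type(value).__name__}")
    return errors

sample = {"id": 1, "user": {"name": "a"}, "tags": ["x"]}
schema = infer(sample)
print(validate({"id": "1", "user": {}}, schema))
# ['$.id: expected int, got str', '$.user.name: missing field', '$.tags: missing field']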


🧪 2. Combinatorial Test Generation

Goal: Generate hundreds of valid and invalid test cases automatically from a single endpoint.

  • Creates diverse combinations of optional/required fields

  • Performs boundary testing using real-world data types

  • Generates edge case payloads with minimal human input

  • Helps you shift testing left without writing 100 test cases by hand

📈 This is where real coverage is achieved — not through effort, but through automation.
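
The core trick is nothing exotic; a sketch of the idea (not the tool's internals) using itertools, where None means "omit the field" to cover optional/required mixes:

from itertools import product

FIELD_VARIANTS = {
    "amount":   [1, 0, -1, 10**9, None],          # boundaries + type abuse
    "currency": ["USD", "usd", "", "XXX", None],
    "email":    ["a@b.co", "not-an-email", None],
}

def generate_payloads():
    keys = list(FIELD_VARIANTS)
    for combo in product(*FIELD_VARIANTS.values()):
        yield {k: v for k, v in zip(keys, combo) if v is not None}

print(sum(1 for _ in generate_payloads()))  # 75 combinations from just 3 fields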


📜 3. Real-Time JSON Logging

Goal: Provide debuggable, structured insights into each request/response pair.

  • Captures and logs full payloads with status codes, headers, and durations

  • Classifies errors by type: schema, performance, auth, timeout, etc.

  • Fully CI/CD compatible — ready for pipeline integration

🧩 Imagine instantly knowing which combination failed, why it failed, and what payload triggered it.


๐Ÿ” 4. Advanced Security Testing

Goal: Scan APIs for common and high-risk vulnerabilities without writing separate security scripts.

  • Built-in detection for:

    • XSS, SQL Injection, Command Injection

    • Path Traversal, Authentication Bypass

    • Regex-based scans for sensitive patterns (UUIDs, tokens, emails)

  • Flags anomalies early during development or staging

🛡️ You don’t need a separate security audit to find the obvious vulnerabilities anymore.


⚙️ How It Works (Under the Hood)

  • Developed in Python, using robust schema libraries and custom validation logic

  • Accepts a simple cURL command or Postman export as input

  • Automatically generates:

    • Schema validators

    • Test payloads

    • Execution reports

  • Debug mode shows complete request/response cycles for every test case


📈 What You Can Expect

The tool is in developer preview stage — meaning results will vary based on use case — but here’s what early adopters and dev teams can expect:

  • ⏱️ Save 70–80% of manual testing time

  • 🐞 Catch 2–3x more bugs by testing combinations humans often miss

  • ⚡ Reduce integration testing time from days to hours

  • 🔒 Get built-in security scans with every API run — no extra work required


🧰 Try It Yourself

🔗 GitHub Repository

👉 github.com/nsharmapunjab/frameworks_and_tools/tree/main/apitester


💬 Your Turn: What’s Your Biggest API Testing Challenge?

I’m actively working on v2 of this tool — with plugin support, OpenAPI integration, and enhanced reporting. But I want to build what developers and testers actually need.

So tell me:

➡️ What’s the most frustrating part of API testing in your projects?

Drop a comment or DM me. I’d love to learn from your use cases.


👋 Work With Me

Need help building test automation frameworks, prepping for QA interviews, or implementing CI/CD quality gates?

📞 Book a 1:1 consultation: 👉 topmate.io/nitin_sharma53


Thanks for reading — and if you found this useful, share it with your dev or QA team. Let’s raise the bar for API quality, together.

#APITesting #AutomationEngineering #QualityAssurance #DevOps #OpenSource #TestAutomation #PythonTools #API #SDET #NitinSharmaTools

🔧 Intercepting Android API Traffic with Burp Suite and a Rooted Emulator

Testing the security and behavior of Android apps often requires intercepting and analyzing API requests and responses. In this guide, we’ll walk through setting up an Android emulator to work with Burp Suite, enabling interception of HTTPS traffic and performing advanced manipulations like brute-force attacks.

⚠️ Requirements:

  • Android Emulator (AVD)
  • Root access (via Magisk)
  • Burp Suite (Community or Professional Edition)


🛠 Step-by-Step Setup Guide

✅ 1. Install Burp Suite

  • Download Burp Suite Community Edition (2023.6.2) from PortSwigger.

  • Launch the app and navigate to:

    Proxy → Options → Proxy Listeners → Import/Export CA Certificate

✅ 2. Export and Install Burp CA Certificate

  1. Export the CA Certificate in DER format and save it with a .crt extension.

  2. Transfer this .crt file to your emulator (drag and drop works fine).

  3. On the emulator:

    • Open Settings → Security → Encryption & Credentials

    • Tap Install from SD card

    • Choose the transferred certificate.

  4. Confirm installation:

    • Go to Trusted Credentials → User and verify the certificate is listed.


🔓 3. Root the Emulator

To trust user-installed certificates at the system level (bypassing Android’s certificate pinning), you must root the emulator.

Tools You’ll Need:

  • The rootAVD script (used below to patch the AVD’s ramdisk with Magisk)
  • The Magisk app and the AlwaysTrustUserCerts.zip module (installed in Step 4)

Rooting Process:

  1. Ensure your AVD is running before executing the root script.

  2. Unzip rootAVD and run the following command in terminal:

./rootAVD.sh ~/Library/Android/sdk/system-images/android-33/google_apis/arm64-v8a/ramdisk.img

  • ✅ For Play Store-enabled AVDs, use google_apis_playstore in the path.
  • Your emulator will shut down automatically after patching.


⚙️ 4. Install Magisk & Trust Certificates

  1. Restart your emulator and open the Magisk app.

  2. Navigate to Modules → Install from Storage → Select AlwaysTrustUserCerts.zip

  3. The emulator will restart again.

  4. Verify the certificate now appears under System certificates, not just User.


๐ŸŒ 5. Connect Emulator to Burp Suite

In Burp Suite:

  1. Go to Proxy → Options → Add Listener

  2. Choose an IP from the 172.x.x.x range.

  3. Set port to 8080 and click OK.

On the Emulator:

  1. Connect to Wi-Fi.

  2. Long press the connected Wi-Fi → Modify Network → Proxy: Manual

  3. Set:

    • Host: Burp Suite IP (e.g., 172.x.x.x)

    • Port: 8080

    • Save the changes.


🚀 6. Intercept Traffic

  • Launch your Android debug app.

  • Open HTTP History in Burp Suite to monitor incoming requests/responses.


🎯 Conclusion

You now have a fully configured Android emulator that allows you to:

  • Intercept and inspect HTTPS API traffic

  • Analyze request/response headers and payloads

  • Perform manual or automated security tests (e.g., brute force attacks)

This setup is ideal for mobile QA, security testing, or reverse engineering Android applications in a safe, isolated environment.


💬 Feel free to bookmark or share this guide with fellow testers or developers diving into mobile app traffic inspection.
Happy hacking!

Saturday, July 26, 2025

AI Engineering in 2025: Skills, Tools, and Paths to Success

Agentic AI and other buzzwords are emerging almost monthly, if not more often. In reality they all describe different variations of agentic systems: whether it is an agentic workflow or a multi-agent system, it’s just a different topology under the same umbrella.

If you are considering a career in AI Engineering in 2025, it might feel overwhelming and that is completely normal.

But you need to remember - you are not too late to the game. The role as such has only emerged over the past few years and is still rapidly evolving.

In order to excel in this competitive space, you will need a clear path and focused skills.

Here is a roadmap you should follow if you want to excel as an AI Engineer in today’s landscape.


Fundamentals - learn as you go.

I have always been a believer that learning fundamentals is key to your career growth. This has not changed.

However, I have to admit that the game itself has changed with the speed the industry is moving at. Starting off with fundamentals before anything else is no longer an option. Hence, you should learn them continuously as you build out a modern AI Engineering skillset.

Here is a list of concepts and technologies I would be learning and applying in my day-to-day if I were to start fresh.

The Fundamentals.

Python and Bash:

  • FastAPI - almost all of the backend services implemented in Python are now running as FastAPI servers.

  • Pydantic - the go-to framework for data type validation. It is now also a Python standard for implementing structured outputs in LLM based applications.

  • uv - the next generation Python package manager. I haven’t seen any new projects not using it.

  • git - get your software version control fundamentals right.

  • Asynchronous programming - extremely important in LLM based applications, as your agentic topologies will often benefit from calling multiple LLM APIs concurrently without blocking (see the sketch after this list).

  • Learn how to wrap your applications into CLI tools that can then be executed as CLI scripts.
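
To make the async point concrete, here is a minimal sketch that fans out several LLM calls concurrently. It assumes the official openai package and an OPENAI_API_KEY in the environment; the model name is just an example:

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def ask(prompt):
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

async def main():
    # The three calls run concurrently; total latency ~= slowest single call
    answers = await asyncio.gather(
        ask("Summarize RAG in one line."),
        ask("Summarize MCP in one line."),
        ask("Summarize ReAct in one line."),
    )
    print(answers)

asyncio.run(main())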

Statistics and Machine Learning:

  • Understand the non-deterministic nature of Statistical models.

  • Types of Machine Learning models - it will help you when LLMs are not the best fit for solving a non-deterministic problem.

  • General knowledge in statistics will help you in evaluating LLM based systems.

  • Don’t fall into the trap of thinking that AI Engineering is just Software Engineering with LLMs; some maths and statistics is involved.


LLM and GenAI APIs.

You should start simple: before picking up any LLM Orchestration Framework, begin with the native client libraries. The most popular is naturally OpenAI’s client, but don’t disregard Google’s genai library; it is not compatible with OpenAI’s APIs, but you will certainly find use cases for Gemini models.

So what should you learn?

LLM APIs.

Types of LLMs:

  • Foundation vs. Fine-tuned.

  • Code, conversational, medical etc.

  • Reasoning Models.

  • Multi-Modal Models.

Structured outputs:

  • Learn how OpenAI and Claude enforce structured outputs via function calling and tool use.

  • Try out simple abstraction libraries like Instructor - they are enough for most use cases and use Pydantic for structure definition natively (see the sketch below).
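
A minimal structured-output sketch with Instructor and Pydantic (requires the instructor and openai packages; the model name is an example):

import instructor
from openai import OpenAI
from pydantic import BaseModel

class Invoice(BaseModel):
    vendor: str
    total: float
    currency: str

client = instructor.from_openai(OpenAI())

invoice = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=Invoice,  # Instructor retries until output parses into this
    messages=[{"role": "user", "content": "Acme Corp invoiced us 101.50 EUR."}],
)
print(invoice)  # vendor='Acme Corp' total=101.5 currency='EUR'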

Prompt Caching:

  • Learn how KV caching helps in reducing generation latency and costs.

  • Native prompt caching provided by LLM providers.

  • How LLM serving frameworks implement it in their APIs (e.g. vLLM).

Model Adaptation.

I love the term Model Adaptation. The first time (and maybe the only time) I’ve seen it in literature was in the book “AI Engineering” by Chip Huyen. The term neatly encompasses what we, AI Engineers, do to make LLMs perform the actions we expect.

What should you learn?

Model Adaptation.

Prompt Engineering:

  • Learn the proper prompt structure. It will differ depending on the provider you are using.

  • Understand context size limitations.

  • Prompting techniques like Chain of Thought, Tree of Thought, Few-shot.

  • Advanced prompting techniques: Self-consistency, Reflection, ReAct.

Tool Use:

  • Tool Use is not magic, learn how it is implemented via context manipulation.

  • Don’t rush to agents yet; learn how LLMs are augmented with tools first (see the sketch after this list).

  • You might want to pick up a simple LLM Orchestrator Framework at this stage.
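
Here is a bare-bones sketch of what tool use means mechanically, without any framework: the model asks for a tool, you run it, append the result to the context, and call the model again. The get_weather function is a stand-in for your own code, and the model name is an example:

import json
from openai import OpenAI

client = OpenAI()

def get_weather(city):
    return f"18C and cloudy in {city}"  # stub - replace with a real lookup

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Vilnius?"}]
resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
call = resp.choices[0].message.tool_calls[0]

# Context manipulation: the tool result becomes just another message
messages.append(resp.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": get_weather(**json.loads(call.function.arguments)),
})
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)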

Storage and Retrieval.

Vector Databases:

  • Learn strengths and weaknesses of vector similarity search.

  • Different types of Vector DB indexes: Flat, IVFFlat, HNSW.

  • When PostgreSQL pgvector is enough.

Graph Databases:

  • High level understanding about Graph Databases.

  • Don’t spend too much time here; Graph DBs still see limited use, even though the promises connected with Graph Retrieval were, and still are, big.

  • Current challenges still revolve around the cost of data preparation for Graph Databases.

Hybrid retrieval:

  • Learn how to combine the best of keyword and semantic retrieval to get the most accurate results; one common fusion approach is sketched below.
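
One common recipe is to run keyword (e.g. BM25) and vector search separately, then merge the ranked lists with Reciprocal Rank Fusion; a minimal sketch:

def reciprocal_rank_fusion(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Documents ranked high in either list accumulate more score
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits   = ["doc3", "doc1", "doc7"]  # keyword search results (best first)
vector_hits = ["doc1", "doc5", "doc3"]  # semantic search results (best first)
print(reciprocal_rank_fusion([bm25_hits, vector_hits]))
# ['doc1', 'doc3', 'doc5', 'doc7'] - docs appearing in both lists rise to the top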

RAG and Agentic RAG.

Data Preprocessing:

  • Learn how to clean data before computing Embeddings.

  • Different chunking strategies.

  • Extracting useful metadata to be stored next to the embeddings.

  • Advanced techniques like Contextual Embeddings.

Data Retrieval, Generation and Reranking:

  • Experiment with the amount of data being retrieved.

  • Query rewriting strategies.

  • Prompting for Generation with retrieved Context.

  • Learn how reranking of retrieved results can improve the accuracy of retrieval in your RAG and Agentic RAG systems.




MCP:

  • Agentic RAG is where MCP starts to play a role: you can implement different data sources behind MCP Servers. By doing so, you decouple the domain responsibility to the data owner.


LLM Orchestration Frameworks:

  • You don’t need to rush into choosing an Orchestration Framework; most of them hide the low-level implementation from you, and you would be better off starting without any framework whatsoever, using light wrappers like Instructor instead.

  • Once you do want to pick up an Orchestrator, I would go for the popular ones, because that is what you run into in the real world:

    • LangChain/LangGraph.

    • CrewAI.

    • LlamaIndex

    • Test out Agent SDKs of Hyper-scalers and AI Labs.

AI Agents.

AI Agent and Multi-Agent Design Patterns:

  • ReAct.

  • Task Decomposition.

  • Reflexion.

  • Planner-Executor.

  • Critic-Actor.

  • Hierarchical.

  • Collaborative.

Memory:

  • Learn about Long and Short-Term memory in Agentic Systems and how to implement it in real world.




  • Try out mem0 - the leading Framework in the industry for managing memory. It now also has an MCP server that you can plug into your agents.

Human in or on the loop:

  • Learn how to delegate certain actions back to humans if the Agent is not capable of solving the problem or the problem is too sensitive.

  • Human in the loop - a human is always responsible for confirming or performing certain actions.

  • Human on the loop - the Agent decides if human intervention is needed.

A2A, ACP, etc.:

  • Start learning Agent Communication Protocols like A2A by Google or ACP by IBM.

  • There are more protocols popping up each week, but the idea is the same.

  • The Internet of Agents is becoming a real thing. Agents are implemented by different companies or teams, and they will need to be able to communicate with each other in a distributed fashion.

Agent Orchestration Frameworks:

  • Put more focus on Agent Orchestration Frameworks defined in the previous section.



Infrastructure.

Kubernetes:

  • Have at least a basic understanding of Docker and Kubernetes.

  • If your current company does not use K8s, you are more likely to run into one that does than the opposite.

Cloud Services:

  • Each of the major cloud providers has its own set of services meant to help AI builders:

    • Azure AI Studio.

    • Google Vertex AI.

    • AWS Bedrock.

CI/CD:

  • Learn how to implement Evaluation checks into your CI/CD pipelines.

  • Understand how Unit Eval Tests are different from Regression Eval Tests.

  • Load test your applications.

Model Routing:

  • Learn how to implement Model fallback strategies to make your applications resilient to provider outages and rate limits.

  • Try tools like liteLLM, Orq or Martian.

LLM Deployment:

  • Learn basics of LLM deployment Frameworks like vLLM.

  • Don’t focus too much on this, as it would be a rare case that you would need to deploy your own models in the real world.

Observability and Evaluation.

AI Agent Instrumentation:

  • Learn what SDKs exist for instrumenting Agentic applications, some examples:

    • Langsmith SDK.

    • Opik SDK.

    • Openllmetry.

  • Learn Multi-Agent system Instrumentation: how do we connect traces from multiple agents into a single thread?

  • You can also dig deeper into OpenTelemetry because most of the modern LLM Instrumentation SDKs are built on top of it.

Observability Platforms:

  • There are many Observability platforms available off the shelf, but you need to learn the fundamentals of LLM Observability:

    • Traces and Spans.

    • Evaluation datasets.

    • Experimenting with changes to your application.

    • Sampling Traces.

    • Prompt versioning and monitoring.

    • Alerting.

    • Feedback collection.

    • Annotation.

Evaluation Techniques:

  • Understand the costs associated with LLM-as-a-judge based evaluations:

    • Latency related.

    • Monetary related.

  • Know at which step of the pipeline you should run evaluations to get the most out of them. You will not be able to evaluate every run in production due to cost constraints.

  • Learn alternatives to LLM based evaluation:

    • Rule based.

    • Regex based.

    • Regular Statistical measures.

Recently, I wrote a piece on building and evolving your Agentic Systems. The ideas I put out are very tightly connected with being able to Observe and Evaluate your systems as they are being built out. Read more here:




Security.

Guardrails:

  • Learn how to guardrail inputs to and outputs from the LLM calls.

  • Different strategies:

    • LLM based checks.

    • Deterministic rules (e.g. Regex based).

  • Try out tools like GuardrailsAI.

Testing LLM based applications:

  • Learn how to test the security of your applications.

  • Try to break your own Guardrails and jailbreak from system prompt instructions.

  • Perform advanced Red Teaming to test emerging attack strategies and vectors.

Looking Forward.

The future development of Agents will be an interesting area to observe. The startups most likely to succeed will be the ones that have one of the following:

  • Distribution.

  • Good UX.

  • Real competitive moats, like physical products. This is where robotics comes into play.


Voice, Vision and Robotics:

  • An interesting blend of capabilities that would allow a physical machine to interact with the world. The areas that I am looking forward to are:

    • On-device Agents.

    • Extreme Quantisation techniques.

    • Foundation Models tuned specifically for robotics purposes.

Automated Prompt Engineering:

  • New techniques are emerging that allow you to perform automated Prompt Engineering given that you have good test datasets ready for evaluation purposes.

  • Play around with frameworks like DsPy or AdalFlow.

Summary.

The skillset requirements for AI Engineers are becoming larger every month. The truth is that in your day-to-day you will only need a subset of it.

You should always start with your immediate challenges and adapt the roadmap accordingly.

However, don’t forget to look back and learn the fundamental techniques that power more advanced systems. In many cases these fundamentals are hidden behind layers of abstraction.


Happy building!
