BSG Blog Berezha Security Group

Mobile App Security Testing: iOS and Android Pentest Guide

Your mobile app runs on devices you don’t control, in environments you can’t predict. That binary sitting on a user’s phone — with its local storage, hardcoded configuration, and network calls — is an entirely different attack surface from your web application. It demands a different testing approach.

In Q1 2026 alone, security providers blocked over 12 million malware-related attacks targeting mobile devices, with banking trojans and spyware leading the threat landscape. Meanwhile, regulatory pressure from PCI DSS 4.0, DORA, and HIPAA continues to push organisations toward mandatory mobile security assessments.

This guide covers what mobile app security testing involves, where iOS and Android apps typically fail, and how to decide between doing it yourself and hiring a professional pentester.

What is mobile app security testing?

Mobile app security testing is a structured evaluation of how your iOS or Android application handles data, authenticates users, communicates with backend services, and resists adversarial manipulation. It goes beyond web application testing because mobile apps introduce platform-specific risks: local data storage, inter-process communication, hardware sensor access, and the binary itself being available for reverse engineering.

A thorough mobile security assessment combines two approaches:

  • Static analysis examines the app binary, source code, and configuration files without running the application. It identifies hardcoded secrets, weak cryptographic implementations, insecure API endpoints, and misconfigurations in platform settings.
  • Dynamic analysis observes the running application to test authentication flows, intercept network traffic, manipulate runtime behaviour, and probe for business logic vulnerabilities that only appear during execution.

Neither approach alone is sufficient. Automated scanners catch the low-hanging fruit — hardcoded API keys, missing certificate pinning, known vulnerable libraries. But business logic flaws, authentication bypasses, and privilege escalation vulnerabilities require a skilled tester who understands your application’s context.

OWASP MASVS: the industry standard

The OWASP Mobile Application Security Verification Standard (MASVS) provides the definitive framework for mobile security testing. Updated regularly, MASVS organises requirements into categories that map directly to real-world attack patterns:

| MASVS category | What it covers | Why it matters |
| --- | --- | --- |
| Storage | Data at rest, keychain/keystore usage, logs, backups | Leaked data from device backups or logs is a top breach vector |
| Crypto | Algorithm selection, key management, random number generation | Weak crypto makes encrypted data functionally plaintext |
| Auth | Session management, biometrics, token handling | Broken auth leads directly to account takeover |
| Network | TLS configuration, certificate pinning, API security | Man-in-the-middle attacks intercept credentials and data |
| Platform | IPC, permissions, WebView security, deep links | Platform misuse enables cross-app attacks and data leakage |
| Code quality | Reverse engineering resistance, tampering detection, debugging | An unprotected binary is an open invitation to attackers |
| Privacy | Data collection, consent, tracking, PII handling | Regulatory non-compliance carries real financial penalties |

MASVS defines two verification levels. Level 1 (L1) covers baseline security that every app should meet. Level 2 (L2) adds defence-in-depth requirements for apps handling sensitive data — financial transactions, health records, or authentication credentials. Most penetration testing engagements use L1 as a minimum and L2 for regulated industries.

Where iOS and Android apps typically fail

After hundreds of application security assessments, certain vulnerability patterns appear consistently across both platforms. Here are the findings we see most often.

Data storage vulnerabilities

The problem: Apps store sensitive data in locations accessible to other apps, device backups, or forensic analysis.

On Android, common mistakes include writing tokens or credentials to SharedPreferences in plaintext, logging sensitive data to Logcat, and failing to exclude files from automated backups. On iOS, storing secrets in UserDefaults instead of the Keychain, caching sensitive responses in URLCache, and leaving debug data in the app’s sandbox are frequent findings.

What to do: Use the platform’s secure storage — Android Keystore and iOS Keychain — for all secrets, tokens, and credentials. Disable automated backups for sensitive data directories. Strip all debug logging from release builds.
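A reviewer can spot this class of issue in a decompiled APK without any special tooling. Below is a minimal, hypothetical audit helper (the keyword list and the sample file are illustrative assumptions, not part of any standard) that flags suspiciously named keys sitting in plaintext in a SharedPreferences XML file:

```python
import xml.etree.ElementTree as ET

# Key names that usually indicate a secret stored in plaintext.
# This keyword list is an illustrative assumption, not a standard.
SENSITIVE_KEYWORDS = ("token", "password", "secret", "credential", "api_key")

def find_plaintext_secrets(shared_prefs_xml: str) -> list[str]:
    """Return names of string entries in a SharedPreferences XML dump
    whose key names suggest they hold secrets."""
    root = ET.fromstring(shared_prefs_xml)
    findings = []
    for entry in root.findall("string"):
        name = (entry.get("name") or "").lower()
        if any(kw in name for kw in SENSITIVE_KEYWORDS):
            findings.append(entry.get("name"))
    return findings

# Example: the kind of file pulled from an app's shared_prefs directory
# on a rooted test device.
sample = """<?xml version='1.0' encoding='utf-8'?>
<map>
    <string name="auth_token">eyJhbGciOi...</string>
    <string name="theme">dark</string>
</map>"""

print(find_plaintext_secrets(sample))  # ['auth_token']
```

The same idea scales up: real scanners such as MobSF run larger rule sets over every file in the decompiled package.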

Insecure network communication

The problem: Apps transmit sensitive data over insecure channels or fail to verify server identity.

The most common finding is missing or improperly implemented certificate pinning. Without it, an attacker on the same network (think public Wi-Fi, compromised corporate network) can intercept all API traffic using a proxy with a self-signed certificate. Android’s Network Security Config and iOS’s App Transport Security (ATS) provide baseline protections, but many apps weaken these defaults for development convenience and ship those weakened configurations to production.

What to do: Implement certificate pinning against your API server’s certificate or public key. Use Android’s Network Security Config to enforce TLS and define pin sets. On iOS, ensure ATS is enabled without exceptions. Never ship debug proxy configurations.
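On Android, the pinning and TLS enforcement described above can be declared declaratively. The fragment below is a sketch of a Network Security Config (the domain and pin digests are placeholders); it is referenced from the manifest via `android:networkSecurityConfig`:

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <!-- Enforce TLS and pin the API server's public key -->
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">api.example.com</domain>
        <pin-set expiration="2027-01-01">
            <!-- SHA-256 of the server's SPKI; placeholder value -->
            <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
            <!-- Always include a backup pin for key rotation -->
            <pin digest="SHA-256">BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```

Note the backup pin: shipping a single pin with no rotation plan is how teams lock their own users out after a certificate renewal.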

Authentication and session management

The problem: Apps implement authentication logic on the client side, making it trivially bypassable.

Client-side biometric checks that don’t tie to server-side session validation are common in fintech apps. We frequently see apps where disabling a biometric prompt — through runtime manipulation or binary patching — grants full access. Flaws in token storage and refresh logic are also frequent: long-lived tokens without rotation, refresh tokens stored alongside access tokens, and missing token revocation on logout.

What to do: Treat the client as untrusted. All authentication decisions must be validated server-side. Use short-lived access tokens with secure refresh flows. Bind biometric authentication to cryptographic operations in the secure enclave, not just a boolean check.
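The "treat the client as untrusted" principle is easiest to see on the token side. The sketch below is a simplified, illustrative server-side check (a real deployment would use a vetted JWT library and proper key management): the server signs the expiry into the token, so a client that tampers with its local copy or tries to extend its own session fails validation on the backend, regardless of what the app's UI shows.

```python
import base64
import hashlib
import hmac
import json
import time

SERVER_KEY = b"server-side-secret"  # held on the server, never in the app binary

def issue_token(user_id: str, ttl_seconds: int = 900) -> str:
    """Issue a short-lived token whose expiry is signed server-side."""
    payload = json.dumps({"sub": user_id, "exp": time.time() + ttl_seconds})
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload.encode()).decode() + "." + sig

def validate_token(token: str):
    """Return the user id if the token is authentic and unexpired, else None."""
    try:
        encoded, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(encoded.encode()).decode()
    except Exception:  # malformed token
        return None
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered: the client cannot forge the signature
    claims = json.loads(payload)
    if time.time() > claims["exp"]:
        return None  # expired: the client cannot extend its own session
    return claims["sub"]

token = issue_token("alice")
print(validate_token(token))        # alice
print(validate_token(token + "0"))  # None (signature check fails)
```

The same logic applies to biometrics: the prompt on the device should unlock a key that signs a server-verified challenge, not flip a local boolean.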

Hardcoded secrets and API keys

The problem: Developers embed API keys, encryption keys, or credentials directly in the application binary.

Because mobile app binaries are distributed to end users, anything in the binary is extractable. Decompiling an Android APK with jadx or disassembling an iOS binary with Hopper takes minutes. We routinely find AWS keys, Firebase secrets, third-party API tokens, and sometimes database credentials hardcoded in mobile apps.

What to do: Never embed secrets in mobile app binaries. Use server-side proxying for third-party API calls. If a client-side key is unavoidable (such as a Google Maps API key), restrict it by bundle ID, API scope, and usage quota.
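The extractability point is easy to demonstrate: once a binary is decompiled, finding embedded keys is a string search. The sketch below is a hypothetical minimal scanner; the two regexes cover well-known key formats (AWS access key IDs and Google API keys), while real tools ship far larger pattern sets.

```python
import re

# Two well-known key formats; real scanners use far larger pattern sets.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Google API key": re.compile(r"AIza[0-9A-Za-z\-_]{35}"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Scan decompiled source or extracted strings for hardcoded keys."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits

# Example input: the kind of constant jadx output often contains.
# (AKIAIOSFODNN7EXAMPLE is AWS's published example key, not a live credential.)
decompiled = 'public static final String KEY = "AKIAIOSFODNN7EXAMPLE";'
print(scan_for_secrets(decompiled))  # [('AWS access key ID', 'AKIAIOSFODNN7EXAMPLE')]
```

Anything this three-minute script can find, an attacker finds too, which is why restriction by bundle ID and quota matters even for keys that must ship client-side.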

Platform-specific misconfigurations

Android-specific findings:

  • Exported Activities, Services, and Broadcast Receivers that accept untrusted input
  • Insecure content providers exposing internal databases
  • WebView with JavaScript enabled and access to local files
  • Debuggable flag left enabled in release builds
  • Permissive intent filters that allow deep link hijacking
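The exported-component issue usually comes down to a single manifest attribute. In the hypothetical fragment below (the activity name is a placeholder), any app on the device can launch the activity and supply the intent extras it reads, so the component must validate all incoming data, or declare `android:exported="false"` if external access isn't needed:

```xml
<!-- AndroidManifest.xml (fragment) -->
<!-- exported="true" means ANY app on the device can launch this
     activity and control the intent data it receives. -->
<activity
    android:name=".internal.TransferActivity"
    android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:scheme="myapp" android:host="transfer" />
    </intent-filter>
</activity>
```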

iOS-specific findings:

  • Custom URL schemes vulnerable to hijacking (Universal Links are safer)
  • Pasteboard data accessible to other apps
  • Improper handling of background screenshots (sensitive data visible in app switcher)
  • Keychain items with overly permissive access groups
  • Missing jailbreak detection for high-security apps

Tools for mobile app security testing

Static analysis

| Tool | Platform | Use case |
| --- | --- | --- |
| MobSF | Both | Automated static + dynamic analysis, great starting point |
| jadx | Android | Decompiles APK to readable Java source |
| apktool | Android | Decodes resources, enables binary inspection |
| Hopper / Ghidra | iOS | Binary disassembly and analysis |
| Semgrep | Both | Custom rules for source code security patterns |

Dynamic analysis

| Tool | Platform | Use case |
| --- | --- | --- |
| Frida | Both | Runtime instrumentation — bypass pinning, hook functions, modify behaviour |
| Burp Suite / OWASP ZAP | Both | HTTP(S) traffic interception and manipulation |
| objection | Both | Runtime exploration built on Frida |
| Drozer | Android | IPC testing — exported components, content providers |
| Cycript / LLDB | iOS | Runtime manipulation and debugging |

Getting started

MobSF is the best entry point for teams new to mobile security testing. It performs automated static and dynamic analysis and generates reports that highlight the most critical findings. But remember: automated tools typically find 30–40% of the vulnerabilities a skilled manual tester discovers. They excel at configuration issues and known patterns; they miss business logic flaws entirely.

When to hire a professional

Automated tools and internal testing can handle baseline security hygiene. But certain scenarios demand professional application security expertise:

  • Regulatory requirements — PCI DSS, HIPAA, SOC 2, and DORA audits often require third-party penetration testing reports from qualified assessors.
  • High-value targets — Fintech, healthcare, and apps handling payment data face motivated, skilled attackers. The testing needs to match the threat.
  • Pre-launch assurance — A professional pentest before your app hits the App Store or Google Play catches vulnerabilities that would be expensive to fix post-release.
  • After a breach or incident — Independent assessment provides objective findings and credible remediation evidence for regulators and customers.
  • Complex architectures — Apps with multiple backend integrations, custom authentication, or real-time communication protocols need testers who’ve seen these patterns before.

A professional mobile app pentest typically costs $8,000–$25,000 per platform, takes 2–3 weeks, and should include a detailed findings report with remediation guidance and a free retest to verify fixes. If your quote doesn’t include retesting, ask why.

Building mobile security into development

Testing at the end of the development cycle is better than not testing at all, but shifting security left is more effective and cheaper. Here’s a practical approach:

  1. Threat model during design — Before writing code, identify what data your app handles, where it flows, and what an attacker could gain. Focus testing resources on the highest-risk areas.

  2. Use platform security features — Android Keystore, iOS Keychain, Network Security Config, ATS. These exist because the platform vendors have already solved common security problems. Use them.

  3. Automate what you can — Run MobSF or Semgrep in CI/CD to catch regressions. Flag new uses of insecure APIs, hardcoded strings, and missing security configurations.

  4. Train your developers — Mobile developers who understand OWASP MASVS write more secure code from the start. Targeted secure coding training for mobile platforms reduces the volume and severity of findings in every subsequent pentest.

  5. Test before every major release — New features mean new attack surface. Schedule professional pentesting before major releases, and use automated tools for interim builds.
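As a concrete example of step 3, a CI job can fail the build whenever a scan report contains blocking findings. The sketch below assumes a simplified report shape (`{"findings": [{"severity": ..., "title": ...}]}`); the field names are illustrative, so map them to whatever your scanner actually emits.

```python
import json

# Assumed, simplified report shape; adapt to your scanner's real schema.
SAMPLE_REPORT = json.dumps({
    "findings": [
        {"severity": "high", "title": "Hardcoded API key in strings.xml"},
        {"severity": "info", "title": "App uses ProGuard"},
    ]
})

def gate(report_json: str, fail_on=frozenset({"high", "critical"})) -> int:
    """Return a non-zero exit code if the report has blocking findings."""
    findings = json.loads(report_json).get("findings", [])
    blocking = [f for f in findings if f.get("severity") in fail_on]
    for f in blocking:
        print(f"BLOCKING [{f['severity']}] {f['title']}")
    return 1 if blocking else 0

exit_code = gate(SAMPLE_REPORT)
print("exit code:", exit_code)  # 1: one high-severity finding blocks the build
```

Wire the returned code into the pipeline (for example via `sys.exit`) and new regressions stop merging instead of accumulating until the next pentest.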

Frequently asked questions

What’s the difference between mobile app testing and web app testing?

Mobile app testing covers the binary on the user’s device — local storage, platform APIs, hardware access, reverse engineering — plus the backend APIs. Web app testing focuses only on server-side behaviour and browser-based interactions. A mobile app with a web API backend needs both types of testing for complete coverage.

How long does a mobile app pentest take?

Most mobile app penetration tests take 2–3 weeks per platform. Simple apps (few features, standard authentication) can be tested in 1–2 weeks. Complex apps with custom protocols, multiple user roles, and offline functionality may require 3–4 weeks. Testing both iOS and Android versions in parallel saves time but requires separate testing for each platform’s unique attack surface.

Do I need to test both iOS and Android separately?

Yes. Even if both apps share a common codebase (React Native, Flutter, Kotlin Multiplatform), the compiled binaries interact differently with each platform’s security features. Android’s content providers, broadcast receivers, and intent system create attack surface that doesn’t exist on iOS. iOS has its own unique risks with keychain configuration, URL scheme handling, and pasteboard behaviour. Testing one platform doesn’t cover the other.

Can automated scanners replace manual pentesting?

Automated scanners are useful for baseline checks — they’ll catch hardcoded secrets, missing certificate pinning, and known vulnerable libraries. But they can’t test business logic, evaluate authentication flows in context, or chain vulnerabilities together the way a skilled tester does. Industry data consistently shows that manual testing finds 2–3x more high-severity vulnerabilities than automated scanning alone. Use both: scanners for continuous monitoring, manual pentesting for depth.

Get your mobile app tested

BSG’s application security team has tested mobile apps across fintech, healthcare, SaaS, and e-commerce — finding the vulnerabilities that scanners miss. Every engagement includes a detailed findings report, clear remediation guidance, and a free retest to verify your fixes.

Get a free consultation — tell us about your app and we’ll scope the assessment within 24 hours.