Thursday, September 2, 2021

Security Test Checklist - Cheatsheet

As part of an engineering team, especially when we operate at scale and are responsible for the quality of the product we ship to the outside world, it is important to look at every deliverable from a security perspective.

Please follow the checklist below as part of your regular deliverables.

  • Broken Object Level Authorization

    • Let’s say a user generates a document with ID=322. They should only be allowed access to that document. If you specify ID=109 or some other ID, the service should return a 403 (Forbidden) error. To test this issue, what parameters can you experiment with? You can pass an ID in the URL, in query parameters, or in the body (in XML or JSON). Try changing them and see what the service returns.
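
      A minimal sketch of such a check, assuming a hypothetical /documents/{id} endpoint and two pre-provisioned test users (the URL, token, and IDs below are placeholders, not a real API):

        import requests

        BASE_URL = "https://api.example.com"      # hypothetical service
        USER_A_TOKEN = "token-of-user-a"          # test account that owns document 322
        OTHER_USERS_DOC_ID = 109                  # document owned by a different user

        def test_cannot_read_another_users_document():
            # User A tries to fetch a document that belongs to someone else.
            resp = requests.get(
                f"{BASE_URL}/documents/{OTHER_USERS_DOC_ID}",
                headers={"Authorization": f"Bearer {USER_A_TOKEN}"},
                timeout=5,
            )
            # The service must refuse access rather than leak the document.
            assert resp.status_code in (403, 404), resp.text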

  • Broken User Authentication

    • Here, you can test whether the session token gets reassigned after each successful login and after the access level is escalated in the application. If your application removes or otherwise changes the session token, check that the service returns a 401 error. It must not be possible to predict the session token of the next session; it should be as random as possible.
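
      A rough sketch of two such checks, assuming a hypothetical /login endpoint that returns a session_token field and a placeholder test account (all names below are assumptions):

        import requests

        BASE_URL = "https://api.example.com"                   # hypothetical service
        CREDS = {"username": "qa-user", "password": "secret"}  # placeholder test account

        def login():
            resp = requests.post(f"{BASE_URL}/login", json=CREDS, timeout=5)
            resp.raise_for_status()
            return resp.json()["session_token"]                # assumed response field

        def test_token_rotates_on_every_login():
            # Two logins must never return the same session token.
            assert login() != login()

        def test_tampered_token_is_rejected():
            token = login() + "x"                              # corrupt the token
            resp = requests.get(f"{BASE_URL}/profile",         # assumed authenticated endpoint
                                headers={"Authorization": f"Bearer {token}"},
                                timeout=5)
            assert resp.status_code == 401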

  • Excessive Data Exposure

    • For example, you have an interface that displays four fields: First Name, Position, Email Address, and Photo. However, if you look at the API response, you may find more data, including sensitive data like Birth Date or Home Address. The second type of Excessive Data Exposure occurs when the UI and API data both look correct, but the filtering happens only on the front end and is not enforced on the back end. You may be able to specify in the request which data you need, but the back end does not check whether you really have permission to access that data.
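
      One way to catch the first case is to diff the API response against the fields the UI actually shows. A sketch, assuming a hypothetical /employees/{id} endpoint that returns a flat JSON object (the field names below are assumptions):

        import requests

        BASE_URL = "https://api.example.com"   # hypothetical service
        UI_FIELDS = {"first_name", "position", "email", "photo_url"}   # what the UI displays

        def test_api_returns_only_ui_fields():
            resp = requests.get(f"{BASE_URL}/employees/42", timeout=5)
            resp.raise_for_status()
            extra = set(resp.json().keys()) - UI_FIELDS
            # Anything beyond the UI fields (birth date, home address, ...) is excessive exposure.
            assert not extra, f"API exposes unexpected fields: {extra}"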

  • Lack of Resources & Rate Limiting

    • A naive rule is that the API should not accept more than N requests per second in total. However, this strategy is not quite correct: if one client generates more traffic than another, your API should still remain stable for all clients.

      This can be resolved using special status codes, for example, 429 (Too Many Requests). Using this status code, you can implement some form of Rate Limiting. There are also special proprietary headers. For example, GitHub uses its X-RateLimit-*. These headers help regulate how many requests the client can send during a specific unit of time.

    • The second scenario is that request parameters may not be validated strictly enough. Suppose you have an endpoint that returns a list of users with a parameter like size=10. What happens if an attacker changes this to 200000? Can the application cope with such a large request?
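
      A sketch of both checks, assuming a hypothetical /users endpoint with a size parameter (the request count, threshold, and status codes below are assumptions about a sensible policy, not the actual one):

        import requests

        BASE_URL = "https://api.example.com"   # hypothetical service

        def test_rate_limit_eventually_kicks_in():
            # Hammer the endpoint; if rate limiting is in place we expect at least one 429.
            statuses = [requests.get(f"{BASE_URL}/users", timeout=5).status_code
                        for _ in range(200)]
            assert 429 in statuses

        def test_oversized_page_size_is_rejected():
            resp = requests.get(f"{BASE_URL}/users", params={"size": 200000}, timeout=5)
            # The service should clamp or reject the value, not try to return 200000 rows.
            assert resp.status_code in (400, 422) or len(resp.json()) <= 1000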

  • Broken Function Level Authorization

    • This is concerned with vertical levels of authorization: a user attempting to gain more access rights than allowed, for example a regular user trying to become an admin. To find this vulnerability, you must first understand how the various roles and objects in the application are connected. Secondly, you must clearly understand the access matrix implemented in the application.
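
      The simplest check is to replay privileged calls with a non-privileged token. A sketch, assuming a hypothetical admin-only endpoint (URL and token are placeholders):

        import requests

        BASE_URL = "https://api.example.com"            # hypothetical service
        REGULAR_USER_TOKEN = "token-of-regular-user"    # placeholder non-admin account

        def test_regular_user_cannot_call_admin_endpoint():
            resp = requests.delete(
                f"{BASE_URL}/admin/users/42",            # assumed admin-only endpoint
                headers={"Authorization": f"Bearer {REGULAR_USER_TOKEN}"},
                timeout=5,
            )
            assert resp.status_code == 403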

  • Mass Assignment

    • Avoid providing convenient mass-assignment functions that bind request parameters to object fields in bulk; otherwise an attacker can inject fields the client was never meant to set, such as an is_admin flag.
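
      A sketch of a check for this, assuming a hypothetical user-creation endpoint where is_admin is a server-controlled field (the endpoint and field names are assumptions):

        import requests

        BASE_URL = "https://api.example.com"   # hypothetical service

        def test_privileged_fields_are_not_mass_assigned():
            # Try to sneak a server-controlled field into a normal create request.
            payload = {"first_name": "Mallory", "is_admin": True}
            resp = requests.post(f"{BASE_URL}/users", json=payload, timeout=5)
            resp.raise_for_status()
            created = resp.json()
            assert created.get("is_admin") is not True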

  • Security Misconfiguration

    • What can you test here? First of all, unnecessary HTTP methods must be disabled on the server. Do not show unnecessary errors to the user, and do not pass technical details of an error to the client. If your application uses Cross-Origin Resource Sharing (CORS), that is, if it allows applications from other origins to call it, the CORS headers must be configured appropriately to avoid additional vulnerabilities. Any access to internal files must also be disabled.
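
      Two quick probes along these lines, assuming a hypothetical /users endpoint (which methods and origins are legitimate is, of course, application-specific):

        import requests

        BASE_URL = "https://api.example.com"   # hypothetical service

        def test_unneeded_http_methods_are_disabled():
            resp = requests.request("TRACE", f"{BASE_URL}/users", timeout=5)
            assert resp.status_code in (405, 501)

        def test_cors_is_not_wide_open():
            resp = requests.options(
                f"{BASE_URL}/users",
                headers={"Origin": "https://evil.example",
                         "Access-Control-Request-Method": "GET"},
                timeout=5,
            )
            allowed = resp.headers.get("Access-Control-Allow-Origin", "")
            # An unknown origin must not be echoed back, and "*" is rarely right for an authenticated API.
            assert allowed not in ("*", "https://evil.example")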

  • Injections

    • In my opinion, modern frameworks, modern development methods, and architectural patterns protect us from the most primitive SQL or XSS injections. For example, you can use an object-relational mapping (ORM) layer to avoid SQL injection. This does not mean you can forget about injections altogether: such problems are still possible across a huge number of old sites and systems. Besides XSS and SQL, you should also look for XML injections, JSON injections, and so on.
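
      The classic SQL case in two lines, using sqlite3 only to keep the sketch self-contained; the same contrast applies to any driver or ORM:

        import sqlite3

        def find_user_unsafe(conn, name):
            # Vulnerable: user input is concatenated straight into the SQL string.
            return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

        def find_user_safe(conn, name):
            # Safe: the driver binds the parameter, so input is never treated as SQL.
            return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice')")
        payload = "x' OR '1'='1"
        print(find_user_unsafe(conn, payload))   # returns every row
        print(find_user_safe(conn, payload))     # returns nothing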

  • Improper Assets Management

    • CI/CD pipelines have access to various secrets and confidential data, such as the accounts used to sign the code. Ensure you do not leave hard-coded secrets in the code and do not commit them to the repository, no matter whether it is public or private.
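
      A crude pre-commit style scan for hard-coded secrets; in practice a dedicated secret scanner is a better choice, and the patterns below are only illustrative:

        import pathlib
        import re
        import sys

        PATTERNS = [
            re.compile(r"AKIA[0-9A-Z]{16}"),   # AWS access key ID format
            re.compile(r"(?i)(password|secret|api[_-]?key)\s*=\s*['\"][^'\"]+['\"]"),
        ]

        def scan(root="."):
            hits = []
            for path in pathlib.Path(root).rglob("*.py"):
                for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                    if any(p.search(line) for p in PATTERNS):
                        hits.append(f"{path}:{lineno}: {line.strip()}")
            return hits

        if __name__ == "__main__":
            findings = scan()
            print("\n".join(findings))
            sys.exit(1 if findings else 0)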

  • Insufficient Logging & Monitoring

    • The main idea here is that whatever happens to your application, you must be able to track it. You should always have logs that show precisely what the attacker was trying to do, and you should have systems in place to identify suspicious traffic, and so on. We must also check that no secrets, credentials, or other confidential information end up in the logs; I have seen cases where database usernames and passwords were being logged.
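
      One defensive measure for the last point is a logging filter that masks anything resembling a credential before it is written out; a minimal sketch with Python's standard logging module (the regex is illustrative only):

        import logging
        import re

        class RedactSecretsFilter(logging.Filter):
            """Mask values that look like passwords or tokens before they reach the log."""
            PATTERN = re.compile(r"(?i)(password|passwd|token|secret)=\S+")

            def filter(self, record):
                record.msg = self.PATTERN.sub(r"\1=***", str(record.msg))
                return True

        logging.basicConfig(level=logging.INFO)
        logger = logging.getLogger("app")
        logger.addFilter(RedactSecretsFilter())

        # Logged as "connecting to db host=db1 user=svc password=***"
        logger.info("connecting to db host=db1 user=svc password=hunter2")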

NFR Checklist - Cheatsheet

As test engineers, it is our core responsibility to go through the NFR checklist for each and every ticket we test before we ship any feature or change to production. Resiliency testing plays a key role in a microservice architecture. Let’s work on it actively and build our systems to be resilient.

Software resilience testing is a method of software testing that focuses on ensuring that applications will perform well in real-life or chaotic conditions. In other words, it tests an application’s resiliency, or ability to withstand stressful or challenging factors. Resilience testing is one part of non-functional software testing that also includes compliance, endurance, load and recovery testing. This form of testing is sometimes also referred to as software resilience engineering, application resilience testing or chaos engineering.

Since failures can never be avoided, resilience testing ensures that software can continue performing core functions and avoid data loss even when under stress. Especially as customer expectations are becoming higher and downtime can be detrimental to the success of an organization, it is crucial to minimize disruptions and be prepared for unwanted scenarios. Resilience testing can be considered one part of an organization’s business continuity plan.

 

Please follow the checklist below as part of your regular testing.

  • Logging

    • Add appropriate logging for any feature or change so that we can debug issues at any time later.

    • At the same time, avoid adding unnecessary logging that is not needed for the change.

    • Use the INFO, WARNING, DEBUG, and ERROR levels deliberately.
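
      A small illustration of level choice (the payment example is hypothetical; the point is matching severity to the event):

        import logging

        logger = logging.getLogger("payments")

        def charge(order_id, amount):
            logger.debug("charge called order_id=%s amount=%s", order_id, amount)   # developer detail
            if amount <= 0:
                logger.warning("suspicious amount=%s for order_id=%s", amount, order_id)   # odd but handled
                return False
            try:
                ...  # call the payment provider (omitted)
                logger.info("charged order_id=%s amount=%s", order_id, amount)   # normal business event
                return True
            except Exception:
                logger.error("charge failed for order_id=%s", order_id, exc_info=True)   # needs attention
                raise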

  • Events in CT

    • Add CT events wherever it is necessary to collect user actions, so we can take appropriate action based on them.

  • SPOF (Single Point of Failure)

    • While building a feature, get an overall understanding of the end-to-end flow and figure out whether the whole system or flow would go down if this component or service fails.

  • Security (Credential Mgmt)

  • Error Handling

    • Add appropriate logging for any kind of error that can occur in the feature or change.

    • Test with different sets of data beyond the expected ones, and check how errors are handled at the API end.
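
      A sketch of such a negative test, assuming a hypothetical /orders endpoint (the payloads and expected status codes are assumptions about reasonable behaviour):

        import requests

        BASE_URL = "https://api.example.com"   # hypothetical service

        # Unexpected payloads: missing fields, wrong types, absurdly large values, oversized input.
        BAD_PAYLOADS = [
            {},
            {"quantity": "not-a-number"},
            {"quantity": 10 ** 12},
            {"comment": "x" * 1_000_000},
        ]

        def test_bad_input_returns_clean_client_errors():
            for payload in BAD_PAYLOADS:
                resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)
                # Expect a clean 4xx, never a 500 or a stack trace leaking into the body.
                assert 400 <= resp.status_code < 500, payload
                assert "Traceback" not in resp.text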

  • Timeouts

    • Timeouts help you fail fast if one of your downstream services does not reply within, say, 1 ms.

    • They help prevent cascading failures.
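
      A minimal sketch with the requests library (the downstream URL and the limits are placeholders; pick values based on your latency budget). A timeout raised here can then feed the retry or fallback logic below:

        import requests

        def get_recommendations(user_id):
            # Fail fast instead of hanging on a slow downstream service.
            resp = requests.get(
                f"https://recs.internal.example/users/{user_id}",   # hypothetical downstream
                timeout=(0.1, 0.5),   # 100 ms to connect, 500 ms to read
            )
            resp.raise_for_status()
            return resp.json()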

  • Retries

    • Retries can help reduce recovery time. They are very effective when dealing with intermittent failures.

    • Retries work well in conjunction with timeouts: when you time out, you retry the request.
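
      A sketch of retries with exponential backoff around the timeout above (attempt counts and delays are placeholders; only retry idempotent operations, or ones protected by an idempotency key):

        import time

        import requests

        def get_with_retries(url, attempts=3, base_delay=0.2):
            """Retry on timeouts and 5xx responses, backing off exponentially."""
            for attempt in range(attempts):
                try:
                    resp = requests.get(url, timeout=1)
                    if resp.status_code < 500:
                        return resp
                except requests.Timeout:
                    pass
                # Back off before the next attempt: 0.2 s, 0.4 s, 0.8 s, ...
                time.sleep(base_delay * (2 ** attempt))
            raise RuntimeError(f"{url} still failing after {attempts} attempts")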

  • Fallbacks

    • When there are faults in your systems, choose to use alternative mechanisms to respond with a degraded response instead of failing completely.
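
      For example, serve the last known good data when the downstream call fails; a sketch (the cache strategy and default value are assumptions):

        import requests

        _last_good = {}   # last known good responses, keyed by URL

        def get_with_fallback(url):
            try:
                resp = requests.get(url, timeout=1)
                resp.raise_for_status()
                _last_good[url] = resp.json()
                return _last_good[url]
            except requests.RequestException:
                # Degraded response: stale data (or a safe default) instead of a hard failure.
                return _last_good.get(url, {"items": [], "degraded": True})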

  • Circuit Breaker

    • Circuit breakers are used in households to prevent a sudden surge in current from burning the house down: they trip the circuit and stop the flow of current.

    • The same concept can be applied to our distributed systems: stop making calls to a downstream service when you know it is unhealthy and failing, and allow it to recover.

    • Circuit breakers are required at integration points; they help prevent cascading failures and allow the failing service to recover.
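
      A minimal in-process sketch of the pattern (the thresholds are placeholders; production systems typically use a library or a service mesh for this):

        import time

        class CircuitBreaker:
            """Open after N consecutive failures, then allow a trial call after a cool-down."""

            def __init__(self, max_failures=5, reset_timeout=30.0):
                self.max_failures = max_failures
                self.reset_timeout = reset_timeout
                self.failures = 0
                self.opened_at = None

            def call(self, func, *args, **kwargs):
                if self.opened_at is not None:
                    if time.monotonic() - self.opened_at < self.reset_timeout:
                        raise RuntimeError("circuit open, skipping downstream call")
                    self.opened_at = None   # half-open: let one trial call through
                try:
                    result = func(*args, **kwargs)
                except Exception:
                    self.failures += 1
                    if self.failures >= self.max_failures:
                        self.opened_at = time.monotonic()
                    raise
                self.failures = 0
                return result

        # Usage: breaker = CircuitBreaker(); breaker.call(requests.get, url, timeout=1)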

  • Performance testing

    • Do performance testing of any new API we build to establish a benchmark, so we can compare it against expectations and track it going forward.

    • Do performance testing of existing APIs whenever there are changes around them, to make sure we have not regressed the existing benchmark results.
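
      A rough latency probe whose output can be compared against a recorded baseline (the endpoint, run count, and percentile handling are simplified; a real benchmark would use a proper load-testing tool):

        import statistics
        import time

        import requests

        def benchmark(url, runs=50):
            latencies = []
            for _ in range(runs):
                start = time.perf_counter()
                requests.get(url, timeout=5)
                latencies.append((time.perf_counter() - start) * 1000)
            latencies.sort()
            return {
                "p50_ms": statistics.median(latencies),
                "p95_ms": latencies[int(runs * 0.95) - 1],   # approximate 95th percentile
                "max_ms": max(latencies),
            }

        print(benchmark("https://api.example.com/users"))   # hypothetical endpoint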

  • Failure injection testing

    • Test your services by injecting faults at integration points to verify how resilient your service, and the entire system along with it, really is.
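
      At the unit level, faults can be injected by patching the downstream call; a sketch (the function under test and its expected degraded behaviour are assumptions):

        from unittest import mock

        import requests

        def fetch_recommendations(user_id):
            # Code under test: degrade to an empty list if the downstream fails.
            try:
                resp = requests.get(f"https://recs.internal.example/users/{user_id}", timeout=1)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException:
                return []

        def test_survives_downstream_timeout():
            # Inject the fault: every call to the downstream raises a timeout.
            with mock.patch("requests.get", side_effect=requests.Timeout):
                assert fetch_recommendations(42) == []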

  • Health Check

    • Add health checks for all services and make sure they are up at all times.
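
      A simple poller that walks a list of health endpoints (the service names and URLs are placeholders; in practice this usually lives in the monitoring or orchestration layer):

        import requests

        SERVICES = {
            "orders": "https://orders.internal.example/health",      # hypothetical endpoints
            "payments": "https://payments.internal.example/health",
        }

        def check_all():
            down = []
            for name, url in SERVICES.items():
                try:
                    if requests.get(url, timeout=2).status_code != 200:
                        down.append(name)
                except requests.RequestException:
                    down.append(name)
            return down

        if __name__ == "__main__":
            unhealthy = check_all()
            print("all healthy" if not unhealthy else f"unhealthy: {unhealthy}")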
