Software permeates every aspect of our lives: it runs on our computers, our smartphones and tablets, our watches, and our TVs; it's in the devices we touch, there when we clock in for work, and there when we settle down to relax.
Think about that: would hundreds of millions of consumers really depend on that software so cavalierly if they knew how vulnerable those ones and zeros leave their sensitive data?
Cybercrime ballooned to a $1 trillion industry in 2020, according to the Washington Post. Huge amounts of consumer money ride on a secure software development lifecycle, along with even more in legal fallout and lost revenue for the organizations whose software proves vulnerable. The average cost of cybercrime to a small company that falls victim tips the scales at $200,000, according to CNBC, a financial blow many small companies never recover from.
Companies considering custom software solutions need to get serious about secure software development early in the process. Failing to do so can cost a company money and reputation it can scarcely afford to lose. Here are four points to keep in mind when creating a secure software development solution.
1. Identify and understand the project’s security risks going in.
Prior preparation is the groundwork of success. It's impossible to prepare for every conceivable security risk, but the canon of software security is thick with well-documented risks and vulnerabilities.
Take the time to research the vulnerabilities associated with your category of software and plan a secure software development solution around reducing the risks associated with those vulnerabilities. According to the Open Web Application Security Project (OWASP), common software security risks to have on your radar from the outset include:
- Injection. Injection attacks feed malicious input into a software solution so that it gets executed as a command or query. Injections come in many different forms, but they all boil down to gaining unauthorized access to systems, subsystems, and data (a brief code sketch follows this list).
- Weak Authentication. Weak authentication procedures are another way that cybercriminals can gain access to systems and data they have no right to. Authentication involves the credentials and passwords that authorized personnel use to gain access to the software.
- Weak Session Management. A “session” is the period of time an authorized user has access to the site after the initial authentication. Weak session management might include failing to protect session tokens or to time out idle sessions and require re-authentication.
- Cross-Site Scripting. Cross-site scripting (XSS) is a cyberattack in which a hacker injects malicious script into pages the application serves to other users, in an attempt to bypass security and access controls. Common in web applications, it can be used to deface the site, redirect users to attack sites, or hijack sessions and authentication.
- Insecure Direct Object References. In an insecure direct object references (IDOR) attack, a user manipulates a reference the application exposes, such as a record ID in a URL, to access files or data directly without an authorization check.
- Bad Security Configurations. Outdated or misconfigured security settings could leave your software solution vulnerable to a variety of attack vectors.
- Exposure of Sensitive Data. Improper encryption of data like customer information and payment card information can lead to serious data breaches and loss of public trust in your brand.
- Lack of Function-Level Access Control. Function-level access control involves validating permissions before allowing access to certain functions. Failure to control access to sensitive functions can let cybercriminals invoke those functions directly, bypassing the interface that would normally gate them.
- Cross-Site Request Forgery. This kind of attack tricks a user’s browser during an authenticated session into sending a forged HTTP request, complete with session information, so the application accepts the request without further validation.
- Vulnerable Components. Software is rarely developed from scratch. Developers use open-source code, APIs, and modular components to construct something new from older pieces. Some of these components may have known vulnerabilities that you must be cognizant of and guard against.
- Unvalidated Forwards and Redirects. In this attack, users are redirected from the intended web application to a malicious website or application sometimes masquerading as a legitimate app.
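To make the first of those risks concrete, here is a minimal sketch using only Python's standard library: a parameterized SQL query blocks injection by treating user input strictly as data, and HTML escaping neutralizes a cross-site scripting payload. The `users` table and the function names are hypothetical, chosen only for illustration.

```python
import html
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats `username` strictly as data,
    # so input like "alice' OR '1'='1" cannot alter the SQL statement.
    cur = conn.execute(
        "SELECT id, display_name FROM users WHERE username = ?",
        (username,),
    )
    return cur.fetchone()

def render_comment(comment: str) -> str:
    # Escape user-supplied text before embedding it in HTML so injected
    # <script> tags are displayed as text rather than executed (XSS defense).
    return f"<p>{html.escape(comment)}</p>"

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, display_name TEXT)")
    conn.execute("INSERT INTO users (username, display_name) VALUES ('alice', 'Alice')")
    print(find_user(conn, "alice' OR '1'='1"))  # None: the injection attempt finds nothing
    print(render_comment("<script>alert('xss')</script>"))  # rendered as harmless text
```

Had the query been built by string concatenation instead, the same input would have rewritten the SQL statement and returned every user in the table.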
2. Constantly update your team on security best practices.
Whether the software solution is to be developed in-house or by a third-party provider, the team directly involved in the development must be kept up-to-date on the software security best practices you expect them to adhere to.
Be proactive about this. Even if your team has more expertise than you, don’t assume that you are aligned on a process as sensitive as secure software development. Take the time to get on the same page about best practices like:
- Continuous Documentation. Documentation of the development process helps security personnel track bugs and errors to their source and fix them before they become vulnerable to attack. Automate the documentation process if possible. Documentation may be necessary for quality assurance and regulatory compliance.
- Training of Personnel. The development team needs to plan for the eventual handoff of the software solution to the IT personnel who will maintain it after it goes into service. Prepare for a software security training cycle that includes onboarding, reinforcement, and updates to counter new threats.
- Post-Launch Security. Software security is an ongoing task. Your software security plan shouldn’t be limited to the development cycle. Start early by putting together a software security plan to monitor for new threats, patch vulnerabilities, and keep security training up-to-date.
3. Test, test, and retest.
Secure software development is not “Scout’s Honor.” Nothing can be judged secure until it has been tested; in secure software development, secure means proved secure through testing.
And not just a little testing, but a lot of testing. Repeatedly—at regular intervals and when new security threats emerge. The security testing process for custom software solutions really never ends.
What kind of testing are we talking about? Much of the heavy lifting (though not all) can be shouldered by automated tools, which lightens the load in terms of the time and resources that it takes to check the code for errors and potential vulnerabilities. Automated software security testing tools include:
- SAST Tools. SAST (static application security testing) tools automate the process of static code analysis, checking the source code for errors without executing the program (i.e., in its static state). Different tools can be used to test different coding languages, checking the code for deviations from a set of accepted practices. SAST tools tend to produce a list of possible vulnerabilities, which developers and security experts must then check manually.
- DAST Tools. DAST (dynamic application security testing) tools test the application in its running (dynamic) state. DAST tools ping the application with unexpected inputs in an attempt to produce an error that a cybercriminal could exploit (a simplified sketch of the idea follows this list).
- IAST Tools. IAST (interactive application security testing) tools take DAST a step further. Rather than throwing inputs at the running code hoping to return an error, IAST tools actually scan the code while it is running, in search of errors that can’t be spotted in the static state. IAST tools cannot be used in a vacuum, but they have a record of returning fewer false positives than either DAST or SAST.
- Database Scanning Tools. Database scanning tools inspect oft-neglected software databases in search of potential vulnerabilities.
- Correlation Tools. Correlation tools help you make sense of the output of various tests. They compare the test results in search of matching results, in order to narrow down long security tool reports to the errors most likely to represent actual vulnerabilities.
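As a rough illustration of what DAST-style testing does, the sketch below fuzzes a hypothetical input-parsing function with random, unexpected strings and reports any unhandled failure. Real DAST tools run against a deployed application over HTTP, but the principle of provoking errors an attacker could exploit is the same. Everything here uses only the Python standard library, and `parse_quantity` is an invented stand-in for application code.

```python
import random
import string

def parse_quantity(raw: str) -> int:
    # Hypothetical application function under test, with a lurking bug:
    # it assumes the input is never empty, so raw[0] raises IndexError on "".
    if raw[0] == "+":
        raw = raw[1:]
    value = int(raw)
    if value < 1 or value > 100:
        raise ValueError("quantity out of range")
    return value

def fuzz(func, trials: int = 1000):
    """Feed random, unexpected strings to `func` and collect crash-style failures."""
    failures = []
    for _ in range(trials):
        raw = "".join(random.choices(string.printable, k=random.randint(0, 20)))
        try:
            func(raw)
        except ValueError:
            pass  # graceful rejection of bad input is the expected behavior
        except Exception as exc:  # anything else signals a potential vulnerability
            failures.append((raw, repr(exc)))
    return failures

if __name__ == "__main__":
    for raw, error in fuzz(parse_quantity)[:5]:
        print(f"unhandled failure on input {raw!r}: {error}")
```

The expected rejection path (`ValueError`) is ignored; anything else, like the `IndexError` that an empty string triggers, is flagged for investigation.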
Those are the automated tools. To further prove the integrity of your secure software solution, however, a human brain needs to get involved.
The brain you need belongs to an “ethical hacker,” a software security specialist conversant in the techniques of malicious cybercriminals. Software developers may hire a third-party ethical hacker to perform a penetration test: a simulated attack in which the ethical hacker attempts to breach the system and then provides a report of the vulnerabilities they discovered.
4. Be vigilant about authentication and encryption.
While errors in the code can give criminals access to your software, authentication and encryption remain among the most common sources of security failures. That is ironic, because they are also among the most mundane avenues of vulnerability: not because they are easy to get right, but because they are far less exotic than errors in the code.
Authentication is the process of proving that you have the right to access software functions and features. Usernames and passwords are its primary tools, along with permission structures that differentiate ordinary users from admins.
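As a minimal sketch of one authentication best practice, assuming nothing beyond Python's standard library, the snippet below stores a salted, slow hash of a password (via `hashlib.scrypt`) instead of the password itself, and verifies logins with a constant-time comparison.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) using the memory-hard scrypt KDF from the stdlib."""
    salt = os.urandom(16)  # unique salt per password defeats precomputed tables
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=64 * 1024**2)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the hash and compare in constant time to resist timing attacks."""
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=64 * 1024**2)
    return hmac.compare_digest(digest, expected)

if __name__ == "__main__":
    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("guess123", salt, stored))                      # False
```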
Encryption, on the other hand, hides data from prying eyes. Encryption of sensitive data renders it into a kind of code, which can be decoded only by authorized users who have access to a “key” to decipher the code.
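A minimal sketch of that idea, assuming the widely used third-party `cryptography` package is available: Fernet symmetric encryption turns sensitive data into an unreadable token that only a holder of the key can decode.

```python
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# The key is the secret that authorized parties use to decode the data.
# In practice it would live in a secrets manager, never in source code.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt sensitive data before storing or transmitting it.
token = fernet.encrypt(b"4111 1111 1111 1111")  # ciphertext is unreadable without the key
print(token)

# Only a holder of the key can recover the original bytes.
print(fernet.decrypt(token))                     # b'4111 1111 1111 1111'
```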
Any secure software solution must rigorously adhere to best practices for authentication and encryption. This back-to-basics approach will foil many attempted cyberattacks.
Conclusion
Secure software development is the responsibility of every developer, and it doesn’t happen by accident. It takes rigid adherence to software security best practices, constant testing and retesting, and strict attention to authentication and encryption to create a secure software development solution intentionally.