CISSP

1: Security and Risk Management
2: Asset Security
3: Security Architecture and Engineering
4: Communication and Network Security
5: Identity and Access Management
6: Security Assessment and Testing
7: Security Operations
8: Software Development Security


8: Software Development Security



#1 Q:What's CMMI?

A: Capability Maturity Model Integration (CMMI) for development is a comprehensive, integrated set of guidelines for developing products and software. It addresses the different phases of a software development life cycle, including concept definition, requirements analysis, design, development, integration, installation, operations, and maintenance, and what should happen in each phase. The model describes the procedures, principles, and practices that underlie software development process maturity. It was developed to help software vendors improve their development processes by providing an evolutionary path from an ad hoc “fly by the seat of your pants” approach to a disciplined, repeatable method. Following that path improves software quality, shortens the development life cycle, provides better project management capabilities, allows milestones to be created and met in a timely manner, and replaces a less effective reactive approach with a proactive one.


#2 Q:Do you know about SDLC? Prove it.

A: The system development life cycle addresses how a system should be developed and maintained throughout its life cycle and does not entail process improvement. Each system has its own life cycle, which is made up of the following phases: initiation, acquisition/development, implementation, operation/maintenance, and disposal. A system development life cycle is different from a software development life cycle, even though they are commonly confused. The industry as a whole is starting to differentiate between system and software life-cycle processes because at a certain point of granularity, the manner in which a computer system is dealt with is different from how a piece of software is dealt with. A computer system should be installed properly, tested, patched, scanned continuously for vulnerabilities, monitored, and replaced when needed. A piece of software should be designed, coded, tested, documented, released, and maintained.


#3 Q:Does ISO/IEC 27002 provide guidance on how to create standardized development procedures for a team of programmers?

A: The focus of ISO/IEC 27002 is how to build a security program within an organization. ISO/IEC 27002 is an international standard created by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) that outlines how to create and maintain an organizational information security management system (ISMS). While ISO/IEC 27002 has a section that deals with information systems acquisition, development, and maintenance, it does not provide a process improvement model for software development.


#4 Q:What's the (C&A) process?

A: Oh snap! You mean certification and accreditation? C&A procedures are commonly carried out within government and military environments to ensure that systems and software are providing the necessary functionality and security to support critical missions. The certification process is the technical testing of a system. Established verification procedures are followed to ensure the effectiveness of the system and its security controls. Accreditation is the formal authorization given by management to allow a system to operate in a specific environment. The accreditation decision is based upon the results of the certification process.


#5 Q:Database software should implement the characteristics of the ACID test:

A: Atomicity
Divides transactions into units of work and ensures that all modifications take effect or none takes effect. Either the changes are committed or the database is rolled back.
Consistency
A transaction must follow the integrity policy developed for that particular database and ensure all data is consistent in the different databases.
Isolation
Transactions execute in isolation until completed, without interacting with other transactions. The results of the modification are not available until the transaction is completed.
Durability
Once the transaction is verified as accurate on all systems, it is committed, and the databases cannot be rolled back.
The term “atomic” means that the units of a transaction will occur together or not at all, thereby ensuring that if one operation fails, the others will not be carried out and corrupt the data in the database.
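The atomicity property above can be sketched with Python's standard sqlite3 module. This is a minimal illustration, not production banking code; the table, account names, and amounts are invented, and a CHECK constraint stands in for the integrity policy that forces the rollback:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    # "with conn" opens a transaction: commit on success, rollback on any error.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))

transfer(conn, "alice", "bob", 30)       # succeeds: both updates commit

try:
    transfer(conn, "bob", "alice", 500)  # debit violates CHECK -> rollback
except sqlite3.IntegrityError:
    pass

# The failed transfer's partial credit was rolled back along with the debit,
# so the transaction took effect entirely or not at all.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 70, 'bob': 80}
```

If the second UPDATE had been allowed to fail while the first one persisted, the database would hold a half-applied transaction, which is exactly what atomicity rules out.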



#6 Q:When is Online transaction processing (OLTP) used?

A: Online transaction processing (OLTP) is used when databases are clustered to provide high fault tolerance and performance. It provides mechanisms to watch for and deal with problems when they occur. For example, if a process stops functioning, the monitor mechanisms within OLTP can detect this and attempt to restart the process. If the process cannot be restarted, then the transaction taking place will be rolled back to ensure no data is corrupted or that only part of a transaction happens. OLTP records transactions as they occur (in real time), which usually updates more than one database in a distributed environment.


#7 Q: Databases are commonly used by many different applications simultaneously, with many users interacting with them at one time. That being said, what is concurrency?

A: Concurrency means that different processes (applications and users) are accessing the database at the same time. If this is not controlled properly, the processes can overwrite each other’s data or cause deadlock situations. The negative result of concurrency problems is the reduction of the integrity of the data held within the database. Database integrity is provided by concurrency protection mechanisms. One concurrency control is locking, which prevents users from accessing and modifying data being used by someone else.
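The lost-update problem that locking prevents can be shown with Python threads. This is a sketch with an in-memory counter standing in for a database record; the class and thread counts are invented for illustration:

```python
import threading

class Counter:
    """A shared record; the lock prevents lost updates from concurrent writers."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:          # only one thread may read-modify-write at a time
            current = self.value
            self.value = current + 1

counter = Counter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.value)  # 40000 -- without the lock, some updates could be lost
```

Remove the `with self._lock:` line and two threads can read the same `current` value and overwrite each other's increment, which is the concurrency integrity problem described above.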


#8 Q:What's normalization?

A: Normalization is a process that eliminates redundancy, organizes data efficiently, reduces the potential for anomalies during data operations, and improves data consistency within databases. It is a systematic way of ensuring that a database structure is designed properly to be free of certain undesirable characteristics— insertion, update, and deletion anomalies—that could lead to a loss of data integrity.
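A small sketch of what normalization buys, using Python's sqlite3 module (table and column names are invented): the customer's address is stored once and referenced by key, so one update fixes it everywhere and no update anomaly is possible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Normalized design: each fact is stored exactly once and referenced by key.
# (In an unnormalized design the address would repeat on every order row,
# creating update and deletion anomalies.)
conn.executescript("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    address     TEXT NOT NULL          -- stored once per customer
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    item        TEXT NOT NULL
);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Acme', '12 Main St')")
conn.executemany("INSERT INTO orders VALUES (?, 1, ?)",
                 [(10, "widget"), (11, "gadget")])

# One UPDATE corrects the address everywhere it is used.
conn.execute("UPDATE customers SET address = '99 Oak Ave' WHERE customer_id = 1")
rows = conn.execute("""
    SELECT o.item, c.address FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    ORDER BY o.order_id
""").fetchall()
print(rows)  # [('widget', '99 Oak Ave'), ('gadget', '99 Oak Ave')]
```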


#9 Q:Is the schema of a database system its structure described in a formal language?

A: It is! In a relational database, the schema defines the tables, the fields, relationships, views, indexes, procedures, queues, database links, directories, and so on. The schema describes the database and its structure, but not the data that will live within that database itself. This is similar to a blueprint of a house. The blueprint can state that there will be four rooms, six doors, 12 windows, and so on, without describing the people who will live in the house.
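The "structure without data" point can be made concrete with sqlite3, which keeps the schema in a catalog table even before any rows exist (the table and index names below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rooms (room_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_room_name ON rooms(name)")

# The schema lives in the catalog (sqlite_master) even though the database
# holds no data rows yet -- the blueprint exists before anyone moves in.
schema = [(type_, name) for type_, name in
          conn.execute("SELECT type, name FROM sqlite_master ORDER BY name")]
print(schema)  # [('index', 'idx_room_name'), ('table', 'rooms')]
```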


#10 Q:What is the difference between object-oriented and relational databases?

A: In an object-oriented database, objects are instantiated when needed, and the data and procedures (called methods) go with the object when it is requested. This differs from a relational database, in which the application uses its own procedures to obtain and process data when retrieved from the database.


#11 Q:Of what should subjects accessing a hierarchical database have knowledge?

A: Subjects accessing a hierarchical database must have knowledge of the access path in order to access data. In the hierarchical database model, records and fields are related in a logical tree structure. Parents can have one child, many children, or no children. The tree structure contains branches, and each branch has a number of data fields. To access data, the application must know which branch to start with and which route to take through each layer until the data is reached.
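A toy hierarchical store illustrates the access-path requirement. This is a sketch using a nested Python dict as the tree; all record names are invented:

```python
# Records form a tree; a subject must know the full access path from the
# root, branch by branch, to reach a data field.
tree = {
    "company": {
        "engineering": {"alice": {"role": "developer"}},
        "finance": {"bob": {"role": "analyst"}},
    }
}

def fetch(root, path):
    """Walk the tree along the given path; a wrong path means no access."""
    node = root
    for branch in path:
        node = node[branch]       # raises KeyError if the path is wrong
    return node

print(fetch(tree, ["company", "engineering", "alice", "role"]))  # developer
```

Unlike a relational query, there is no way to ask "find alice wherever she is" here; the caller must know the route through each layer, which is exactly the knowledge requirement described above.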


#12 Q:Explain unit, acceptance, regression and integration testing.

A: + Unit testing involves testing an individual component in a controlled environment to validate data structure, logic, and boundary conditions. After a programmer develops a component, it is tested with several different input values and in many different situations. Unit testing can start early in development and usually continues throughout the development phase. One of the benefits of unit testing is finding problems early in the development cycle, when it is easier and less expensive to make changes to individual units.
+ Acceptance testing is carried out to ensure that the code meets customer requirements. This testing is for part or all of the application, but not commonly one individual component.
+ Regression testing refers to the retesting of a system after a change has taken place to ensure its functionality, performance, and protection. Essentially, regression testing is done to identify bugs that have caused functionality to stop working as intended as a result of program changes. It is not unusual for developers to fix one problem, only to inadvertently create a new problem, or for the new fix to break a fix to an old problem. Regression testing may include checking previously fixed bugs to make sure they have not re-emerged and rerunning previous tests.
+ Integration testing involves verifying that components work together as outlined in design specifications. After unit testing, the individual components or units are combined and tested together to verify that they meet functional, performance, and reliability requirements.
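The unit- and regression-testing ideas above can be sketched with Python's standard unittest module. The component under test (a hypothetical discount function) and the "previously fixed bug" are invented for illustration:

```python
import unittest

def discount(price, percent):
    """Apply a percentage discount; reject out-of-range inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountUnitTests(unittest.TestCase):
    # Unit tests: one component, several input values, boundary conditions.
    def test_typical_value(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_boundaries(self):
        self.assertEqual(discount(80.0, 0), 80.0)
        self.assertEqual(discount(80.0, 100), 0.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            discount(80.0, 150)

    # Regression test: a previously fixed bug (fractional-cent rounding) is
    # re-checked after every change so it cannot silently re-emerge.
    def test_rounding_regression(self):
        self.assertEqual(discount(10.0, 33), 6.7)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountUnitTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

In this framing, rerunning `DiscountUnitTests` after every code change is the regression suite; acceptance and integration testing would exercise the whole application and combined components rather than this single unit.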



#13 Q:Can you describe the component-based system development method?

A: Sure! Component-based development involves the use of independent and standardized modules. Each standard module consists of a functional algorithm or instruction set and is provided with interfaces to communicate with each other. Component-based development adds reusability and pluggable functionality into programs, and is widely used in modern programming to augment program coherence and substantially reduce software maintenance costs. A common example of these modules is “objects” that are frequently used in object-oriented programming.


#14 Q:What is the role of the Java Virtual Machine in the execution of Java applets?

A: Java is an object-oriented, platform-independent programming language. It is employed as a full-fledged programming language and is used to write complete programs and short programs, called applets, which run in a user’s browser. Java is platform independent because it creates intermediate code, bytecode, which is not processor specific. The Java Virtual Machine (JVM) then converts the bytecode into machine-level code that the processor on the particular system can understand. It does not convert the source code into bytecode—a Java compiler does that.
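Java tooling is outside the scope of these notes, but the same compile-to-bytecode-then-VM pipeline can be made visible in Python, whose interpreter likewise runs bytecode on a virtual machine. This is an analogy, not JVM behavior; the `dis` module used below is Python standard library:

```python
import dis

def add(a, b):
    return a + b

# The function's source was compiled to bytecode; the interpreter's virtual
# machine executes these instructions, analogous to the JVM executing the
# bytecode that a Java compiler emits.
instructions = [ins.opname for ins in dis.get_instructions(add)]
print(instructions)
```

The exact opcode names vary by interpreter version, but the listing always ends by returning a value, and none of it is processor-specific machine code; translating bytecode to the host CPU is the virtual machine's job, just as the answer describes for the JVM.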


#15 Q:Remind me about extreme programming.

A: Extreme programming is a methodology that is generally implemented in scenarios requiring rapid adaptations to changing client requirements. Extreme programming emphasizes client feedback to evaluate project outcomes and to analyze project domains that may require further attention. The coding principle of extreme programming throws out the traditional long-term planning carried out for code reuse and instead focuses on creating simple code optimized for the contemporary assignment.


#16 Q:What is a tunneling virus?

A: A tunneling virus attempts to install itself under an antimalware program. When the antimalware program conducts its health check on critical files (file sizes, modification dates, etc.), it makes a request to the operating system to gather this information. If the virus can put itself between the antimalware program and the operating system, then when the antimalware program sends out a system call for this type of information, the tunneling virus can intercept the call and respond with information that indicates the system is free of virus infections.


#17 Q:There are three main types of integrity services: semantic, referential, and entity.

A: Entity integrity guarantees that the tuples are uniquely identified by primary key values. A tuple is a row in a two-dimensional database. A primary key is a value in the corresponding column that makes each row unique. For the sake of entity integrity, every tuple must contain one primary key. If a tuple does not have a primary key, it cannot be referenced by the database.
Referential integrity refers to all foreign keys referencing existing primary keys. There should be a mechanism in place that ensures that no foreign key contains a reference to a primary key of a nonexistent record or a null value. This type of integrity control ensures that the relationships between the different tables are working and can properly communicate to each other.
Semantic integrity mechanisms ensure that the structural and semantic rules of a database are enforced. These rules pertain to data types, logical values, uniqueness constraints, and operations that could adversely affect the structure of the database.
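Entity and referential integrity can be demonstrated with sqlite3, which enforces foreign keys once the pragma is enabled (the department/employee tables are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # sqlite enforces FKs only if enabled
conn.executescript("""
CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE employees (
    emp_id  INTEGER PRIMARY KEY,           -- entity integrity: unique primary key
    name    TEXT NOT NULL,
    dept_id INTEGER NOT NULL REFERENCES departments(dept_id)  -- referential
);
""")
conn.execute("INSERT INTO departments VALUES (1, 'Security')")
conn.execute("INSERT INTO employees VALUES (100, 'Dana', 1)")  # FK exists: OK

try:
    # Foreign key points at a nonexistent department: rejected by the engine,
    # so no tuple can ever reference a record that is not there.
    conn.execute("INSERT INTO employees VALUES (101, 'Lee', 99)")
    outcome = "accepted"
except sqlite3.IntegrityError:
    outcome = "rejected"

print(outcome)  # rejected
```

The same `IntegrityError` would be raised on an attempt to insert a duplicate `emp_id`, which is the entity-integrity guarantee that every tuple is uniquely identified by its primary key.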



#18 Q:When a module is described as having high cohesion and low coupling, is that a good thing?

A: It's great! Cohesion reflects how many different types of tasks a module can carry out. High cohesion means that the module carries out one basic task (such as subtraction of values) or several tasks that are very similar (such as subtraction, addition, multiplication). The higher the cohesion, the easier it is to update or modify and not affect the other modules that interact with it. This also means the module is easier to reuse and maintain because it is more straightforward when compared to a module with low cohesion. Coupling is a measurement that indicates how much interaction one module requires to carry out its tasks. If a module has low or loose coupling, this means the module does not need to communicate with many other modules to carry out its job. These modules are easier to understand and easier to reuse than those that depend upon many other modules to carry out their tasks. It is also easier to make changes to these modules without affecting many modules around them.
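A small sketch of high cohesion and low coupling; the module and class names below are invented for illustration:

```python
class TaxCalculator:
    """High cohesion: only tax math lives here."""
    RATE = 0.07

    def tax(self, amount: float) -> float:
        return round(amount * self.RATE, 2)

class ReceiptFormatter:
    """High cohesion: only presentation lives here."""
    def format(self, amount: float, tax: float) -> str:
        return f"subtotal={amount:.2f} tax={tax:.2f} total={amount + tax:.2f}"

def print_receipt(amount: float,
                  calc: TaxCalculator,
                  fmt: ReceiptFormatter) -> str:
    # Low coupling: collaborators are passed in through small interfaces, so
    # either module can be replaced or updated without touching the other.
    return fmt.format(amount, calc.tax(amount))

print(print_receipt(100.0, TaxCalculator(), ReceiptFormatter()))
# subtotal=100.00 tax=7.00 total=107.00
```

Changing the tax rate touches only `TaxCalculator`, and restyling the receipt touches only `ReceiptFormatter`; that independence is what makes cohesive, loosely coupled modules easier to reuse and maintain.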


#19 Q:Is there anything better than Remote Procedure Calls?

A: The Simple Object Access Protocol (SOAP) was created to be used instead of Remote Procedure Calls (RPCs) to allow applications to exchange information over the Internet. SOAP was created to overcome the compatibility and security issues that RPCs introduced when trying to enable communication between objects of different applications over the Internet. SOAP is an XML-based protocol that encodes messages in a web service setup, allowing programs running on different operating systems to communicate over web-based communication methods. HTTP was not designed to specifically work with RPCs, but SOAP was designed to work with HTTP. SOAP defines an XML schema, or a structure for how communication is going to take place, and that schema defines how objects communicate directly. One advantage of SOAP is that program calls will most likely get through firewalls, since HTTP communication is commonly allowed. This helps ensure that the client/server model is not broken by a firewall denying traffic between the communicating entities.
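A minimal sketch of what a SOAP message body looks like, built with Python's standard ElementTree. The service namespace, operation, and parameter (`GetQuote`, `symbol`) are invented placeholders, not a real web service:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

# Envelope and Body are the standard SOAP structure; the call inside the
# Body is a hypothetical operation in a made-up service namespace.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
call = ET.SubElement(body, "{http://example.com/stock}GetQuote")
ET.SubElement(call, "{http://example.com/stock}symbol").text = "ACME"

xml_text = ET.tostring(envelope, encoding="unicode")
print(xml_text)
```

In practice this XML would be POSTed over HTTP, which is why such calls typically pass through firewalls that already allow web traffic.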


#20 Q:What's going on with fourth-generation programming?

A: The use of heuristics in fourth-generation programming languages drastically reduces the programming effort and the possibility of errors in code. The most remarkable aspect of fourth-generation languages is that the amount of manual coding required to perform a specific task may be ten times less than for the same task in a third-generation language.


#21 Q: Are third-generation languages very resource intensive when compared to second-generation programming languages?

A: Third-generation programming languages are easier to work with than earlier languages because their syntax is similar to human languages. This reduces program development time and allows for simplified and swift debugging. However, these languages can be very resource intensive when compared to second-generation programming languages. By introducing symbols to represent complicated binary codes, second-generation programming languages reduced programming and debugging times. Unfortunately, these languages required extensive knowledge of machine architecture, and programs written in them are hardware specific.


#22 Q:Is threat modeling a good first step for developers to take to identify the security controls that should be coded into a software project?

A: Threat modeling is a systematic approach used to understand how different threats could be realized and how a successful compromise could take place. A threat model is created to define a set of possible attacks that can take place so the necessary countermeasures can be identified and implemented. Through the use of a threat model, the software team can identify and rate threats. Rating the threats based upon the probability of exploitation and the associated impact of each exploitation allows the team to focus on the threats that present the greatest risk. When using threat modeling in software development, the process starts at the design phase and should continue in an iterative process through each phase of the software’s life cycle. Different software development threat modeling approaches exist, but they have many of the same steps, including identifying assets, trust boundaries, data flows, entry points, privilege code, etc. This approach also includes building attack trees, which represent the goals of each attack and the attack methodologies. The output of all of these steps is then reviewed and security controls selected and coded into the software.
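The rating step described above can be sketched as a toy calculation; the threat names, probabilities, and impact scores below are all invented, and real methodologies (e.g., DREAD-style scoring) use richer criteria:

```python
# Rank identified threats by probability-times-impact so the team focuses
# on the items presenting the greatest risk first.
threats = [
    {"name": "SQL injection in login form", "probability": 0.6, "impact": 9},
    {"name": "Verbose error messages",      "probability": 0.8, "impact": 3},
    {"name": "Stolen signing key",          "probability": 0.1, "impact": 10},
]
for t in threats:
    t["risk"] = t["probability"] * t["impact"]

ranked = sorted(threats, key=lambda t: t["risk"], reverse=True)
print([t["name"] for t in ranked])
# ['SQL injection in login form', 'Verbose error messages', 'Stolen signing key']
```

The output of a step like this feeds control selection: the highest-ranked threats get countermeasures designed and coded first.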


#23 Q:Is regression testing security-focused?

A: Regression testing is a type of test that is carried out to identify software bugs that exist after changes have taken place. The goal of regression testing is to ensure that changes that have taken place do not introduce new faults. Testers need to figure out if a change to one part of a software program will affect other parts of the software. A software regression is a bug (flaw) that makes a feature stop working after a change (e.g., patch applied, software upgrade) takes place. A software performance regression is a fault that does not cause the feature to stop working, but the performance of the function is degraded. Regression testing is not security focused and is not used with the goal of identifying vulnerabilities.


#24 Q:What is attack surface analysis?

A: Attack surface analysis is used to map out the parts of a software program that need to be reviewed and tested for vulnerabilities. An attack surface consists of the components that are available to be used by an attacker against the software itself. The attack surface is a sum of the different attack vectors that can be used by an unauthorized user to compromise the system. The more attack surface that is available to attackers, the more they have to work with and use against the software itself. Securing software commonly includes reducing the attack surface and applying defense-in-depth to the portions of the software that cannot have their surface reduced. There is a recursive relationship between an attack surface analysis and threat modeling. When there are changes to an attack surface, threat modeling should take place to identify the new threats that will need to be dealt with. So an attack surface analysis charts out what areas need to be analyzed, and threat modeling allows the developers to walk through attack scenarios to determine the reality of each identified threat.


#25 Q:How can one describe object-oriented programming deferred commitment?

A: Deferred commitment means that the internal components of an object can be refined without changing other parts of the system. Non-object-oriented programming applications are written as monolithic entities. This means an application is just one big pile of code. If you need to change something in this pile, you would need to go through the whole program’s logic functions to figure out what your one change is going to break. If you choose to write your program in an object-oriented language, you don’t have one monolithic application, but an application that is made up of smaller components (objects). If you need to make changes or updates to some functionality in your application, you can just change the code within the class that creates the object carrying out that functionality and not worry about everything else the program actually carries out.
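Deferred commitment can be sketched in a few lines of Python; the class names and behavior are invented for illustration. The internal representation is refined in a subclass while every caller keeps using the same interface:

```python
class InventoryStore:
    """Public interface stays fixed while internals can be refined later."""
    def __init__(self):
        self._items = {}              # internal choice today: a plain dict

    def add(self, name, qty):
        self._items[name] = self._items.get(name, 0) + qty

    def count(self, name):
        return self._items.get(name, 0)

class SortedInventoryStore(InventoryStore):
    """Deferred commitment: the internals are refined (a sorted name index is
    added) without changing any code that calls add() or count()."""
    def __init__(self):
        super().__init__()
        self._order = []

    def add(self, name, qty):
        if name not in self._items:
            self._order.append(name)
            self._order.sort()
        super().add(name, qty)

    def names(self):
        return list(self._order)

store = SortedInventoryStore()
store.add("widget", 3)
store.add("bolt", 5)
print(store.count("widget"), store.names())  # 3 ['bolt', 'widget']
```

Code that only knows the `add`/`count` interface works unchanged with either class; the decision about how the object stores its data internally was deferred, which is the property the answer describes.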