Ref: Research Methods for Postgraduates by Tony Greenfield.
A view of research:
Research, depending on your viewpoint, is:
- A quest for knowledge and understanding
- An interesting, and perhaps, useful experience
- A course for qualification
- A career
- A style of life
- An essential process for commercial success
- A way to improve human quality of life
- An ego boost for you
- A justification for funds for your department and its continued existence
Creativity: There are several methods of intellectual discovery, explained as follows:
- Analogy: Look for a similarity between your problem and one for which the solution is known. Electrical circuits are envisioned as water flowing through tanks, pipes, pumps and valves; brain function is studied by comparison with computers; the more remote your analogy is from your problem, the more creative will be your solution.
- By parts: Break the problem into a series of sub-problems that you hope will be more amenable to solution.
- By random guesses: Edison used this method extensively, and brainstorming is a modern version of it.
- Generalise: If a specific problem is baffling, write a general version of it; an algebraic model leads to simplified solutions compared with tackling complicated arithmetic head on.
- Add: A difficult problem may be resolved by adding an auxiliary sub-problem.
- Subtract: Drop some of the complicating features of the original problem; this is a trick used in simulation to make it more tractable.
- Particularise: Look for a special case with a narrower set of conditions, such as tackling a two-dimensional example of a three-dimensional problem.
- Stretch or contract: Some problems are more tractable if their scale or the range of variables is altered.
- Invert: Look at the problem from the opposite viewpoint; instead of 'When will this train arrive at Oxford?' ask 'When will Oxford arrive at this train?'.
- Restructure: In clinical studies we do not ask if a treatment will cure a disease, but will an inert treatment fail to cure the disease?
- The method of Pappus: Assume the problem is solved and calculate backwards.
- The method of Tertullus: Assume a solution is impossible and try to prove why.
- Part 1: Reviewing the field: Many research projects arise from a study of current thinking in a field. The research project follows from identifying a gap in the literature.
- Part 2: Theory building: In some ways, theory building is the most personal and creative part of the research process. Some people find it the most exciting and challenging part of the whole business. In some cases, data collection precedes theory building and, in others, it follows it.
- Part 3: Theory testing: The sort of theory testing we do will depend on our ambitions and claims for our theory. If we want to claim that our theory applies generally then we may want to use statistical methods (known as inferential statistics), which have been developed to enable us to make claims about whole populations from information about a sample from a population.
- Part 4: Reflecting and integrating: Reflection and integration is the last stage of the research journey. There may be many things on which you want to reflect: what you have learned about the process of research; what you could have done differently; what you have learned about yourself. However, there is one matter for reflection that is a crucial part of the research process itself. It will affect how your research is judged and the impact of your research. You must reflect on how your research findings relate to current thinking in the field of your research topic.
Some reasons for spending time and effort on a review of the field are explained as follows:
- To identify gaps in current knowledge
- To avoid reinventing the wheel (at the very least this will save time and it can stop you from making the same mistakes as others);
- To carry on from where others have already reached (reviewing the field allows you to build on the platform of existing knowledge and ideas);
- To identify other people working in the same and related fields (they provide you with a researcher network, which is a valuable resource indeed);
- To increase your breadth of knowledge of the area in which your subject is located
- To identify the seminal works in your area
- To provide the intellectual context for your own work, (this will enable you to position your project in terms of related work)
- To identify opposing views
- To put your own work in perspective
- To provide evidence that you can access the previous significant work in an area
- To discover transferable information and ideas (information and insights that may be relevant to your own project)
- To discover transferable research methods (research methods that could be relevant to your own project)
A good practical question to ask yourself is: 'What are the implications of my research results for our understanding in this area?' The implications can take many forms. For example:
- You may have filled a gap in the literature
- You may have produced a possible solution to an identified problem in the field
- Your results may challenge accepted ideas in the field (some earlier statements in the literature may seem less plausible in the light of your findings)
- Some earlier statements in the literature may seem more plausible in the light of your findings
- Your work may help to clarify and specify the precise areas in which existing ideas apply and where they do not apply (it may help you to identify domains of application of those ideas)
- Your results may suggest a synthesis of existing ideas
- You may have provided a new perspective on existing ideas in the field
- Your work may suggest new methods for researching your topic
- Your results may suggest new ideas, perhaps some new lines of investigation in the field
- You may have generated new questions in the field
- There may be implications for further research
Students in the UK can consult Theses for a list of doctoral and master's theses accepted at UK universities since 1971, with bibliographic details and keywords for all, and abstracts for the most recent work. The Web of Science (Web of Science) is a more general service covering a very wide range of journals and other databases.
Several excellent bibliographic programs are available:
- EndNote
- Papyrus
- Reference Manager
The main system that maintains e-mail lists in the UK is (E-mail lists), and you can browse that site to find lists relevant to your research, view the lists of members' names, and read the archives of messages.
============================================
Research Resources:
Lists of useful sites:
The following sites provide links to web pages covering specific areas of interest: a more general database of Internet resources covering subjects is maintained by BUBL, and is located at: General database of Internet resources.
Free software:
The Internet is a useful place to find good quality free software for your computer. For example, the following word processing packages can be downloaded free:
Free tools for viewing documents (such as electronic journal articles and preprints) that have been created in a specific format include:
More specialised software packages are also available, for example the following statistics packages:
For details about other software available, see the following sites:
Online Journals:
Several journals now publish their articles on the Internet, as well as in paper format.
Job opportunities:
While studying for a postgraduate qualification, you need to be aware of current job opportunities in your area of research, so that you can apply in good time for a suitable post before the end of your studies. Many websites list current job opportunities, as follows:
==============================================
Strategy for Search:
While many students will be pursuing research that follows soon after their undergraduate academic studies, others may have found their way into research through their professional occupations, such as history, engineering, law, medicine, social services, chemistry and human resources. The latter are called practitioner researchers. The two routes have contrasting approaches to literature searches. One seeks to describe a detailed background to the research area, showing the history of what has gone before in order to establish a wide context, and to support arguments for a new theory. It is intensive. The second seeks to discover current knowledge that will help to answer a well-defined question. It is extensive. This latter approach is reflected in present-day methods and teaching of evidence-based medicine and evidence-based management. Evidence-based practice requires the practitioner to obtain evidence of the up-to-date knowledge in the field.
The two approaches to literature searching are contrasted as follows:
Literature search in higher education:
- Intensive or extensive search: Intensive search (drilling down)
- Relationship to academic disciplines: Intra-disciplinary
- Starting point of search: The state of current knowledge in the field
- Key question for search to answer: What are the gaps in the current literature on this topic?
Literature search in professional practice:
- Intensive or extensive search: Extensive search
- Relationship to academic disciplines: Inter-disciplinary and search outside of current disciplines
- Starting point of search: A problem or an opportunity
- Key question for search to answer: Who has ever encountered this problem, or some variant of it, and what possible solutions have been suggested or tried?
Professional practitioners often face problems with which their academic background and professional studies have given them little or no familiarity. The search to establish 'what is already known about this problem' is not likely to involve drilling deeper within a known subject discipline. The search must range far and wide for places where the problem, or some variation of it, has been encountered in other fields. A structured approach is needed. We describe here a recursive process for literature search by people researching in fields where they may have little or no familiarity with the literature. The elements of the strategy are:
- Focus on core journals and key authors as well as seminal articles
- Identify potentially significant works, journals and people by applying the principle of duplication of references
- Ensure convergence on the set of articles that are most relevant to the issue under investigation by applying the principle of diminishing returns to search
We focus here on just one form of literature to present our strategy: journal articles. These are the most useful source of up-to-date thinking on a subject. Reasons for this include:
- Journal articles usually contain brief reviews of the relevant literature in addition to an account of the method, research findings and how the findings add to what was previously known.
- Academic journals are refereed and this provides some quality control.
- The journals are usually published several times each year so they are likely to be relatively up-to-date.
- Journals are the vehicles through which researchers have traditionally shared their findings with other researchers so they are published in forms designed to enable researchers to find them through a literature search. Thus, for example, they often contain keywords and abstracts designed to enable other researchers to find them in a literature search.
The extent to which you are clear about the theory at the beginning of your research, raises an important question concerning the design of your research project. This is whether your research should use the deductive approach, in which you develop a theory and hypothesis (or hypotheses) and design a research strategy to test the hypothesis, or the inductive approach in which you would collect data and develop theory as a result of your data analysis.
The search strategy is presented as a set of stages that will be most helpful to the practitioner researcher:
Stage 1. Search profile:
Summarize the starting point for the literature search in the form of a search profile. A search profile defines the scope of the search. It provides a preliminary answer to the following questions:
- What is the topic of this research?
- What is the research trying to find out? What is the research question?
- What is the aim(s) of the research investigation?
Producing a search profile will concentrate your mind and force you to clarify what is within the scope of your investigation and what lies outside of it. As the literature search proceeds, the scope of the research is likely to change and it is important to acknowledge that and not regard the search profile as a straitjacket. Its purpose is to provide sufficient clarification of the research topic and definition of the scope of the research to enable the process of the literature search to start.
Stage 2. Keywords:
What words best describe the subject matter of the research? Keywords can be words, combinations of words or short phrases. The keywords will be the way into the literature of the research topic. How many keywords should you start with? The more you use, the more likely you are to find your way into relevant literatures. The basic procedure here involves two steps:
- Generate as many plausible keywords as possible, and then
- Rank them in terms of their relevance
Stage 3. Starter references:
Use the highest ranked keywords to search for relevant journal articles. Collections of periodical/journal indexes come in a variety of forms: printed, CD-ROM or via the web. You can use an electronic database (CD-ROM or web) to search through years of journal indexes easily, rather than having to scan volume after volume manually. The database of journal indexes that you select will depend on the area of your research. This presupposes that you can already identify the subject discipline in which the issue is located. Keywords, on the other hand, enable you to search across subject disciplines. It is easy to start with a web-based book supplier such as Amazon.
Use the information in your search profile to focus your search. Most databases provide a keyword search facility. You may also have the opportunity to specify parameters, such as year(s) and whether you want to do a national or an international search. Focus your search at this stage to obtain a list of journal articles that will be of direct, rather than vague, relevance to the issue you wish to research. Once you have produced a list of journals, identify and obtain the ten articles that look most promising, judging by their titles and abstracts.
Stage 4. Seminal works, core journals and key people:
Identify works that have been particularly influential in the field, the seminal works, the journals that are most likely to publish articles on the topic, the core journals, and the people who are also working on the issue, the key people. Core journals may not all lie within the same field of study as categorised by traditional academic disciplines. For example, a search on 'managing professional change through action research' is likely to yield core journals covering a range of academic disciplines.
Use the references cited in the ten articles to identify:
- Articles and books that are mentioned more than once; these are the potential seminal articles
- Journals that are mentioned more than once; these are potential core journals for the topic of interest
- Authors who are mentioned more than once; these are likely to include the key people researching and writing about the topic of interest
This stage can be completed with various degrees of meticulousness. At one end of the scale you can inspect the references for duplication of published work, journals or authors respectively. At the other end of the scale, you can type all of the references listed in all of the articles into a document and then use cut-and-paste to produce an alphabetical list of titles of the works cited, names of the authors and titles of the journals. Duplications become conspicuous in this process. This may seem a time-consuming approach, but a significant proportion of the references will need to be keyed in at some stage, if only to provide a list of references for the final report of the research. A short cut would be to make a preliminary scan across all the references, highlight only those that look at all plausible, and thereafter work only with the highlighted articles.
An intermediate position on the 'meticulousness' scale would be to key into a table just the names of the authors, the titles of the works cited, and the names of journals or publishers. Most commonly used word-processing packages allow rows to be moved around easily and this facilitates the rearrangement of rows to display the duplications.
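A minimal sketch of this tallying in Python, assuming each article's references have been keyed in as (author, title, journal) entries; the reference data shown are hypothetical:
```python
# Count duplicated works, authors and journals across the reference lists
# of the starter articles (the "principle of duplication" of Stage 4).
from collections import Counter

# One list of (author, title, journal) entries per starter article; hypothetical data.
references = [
    [("Smith", "Action research in practice", "J. of Management Studies"),
     ("Jones", "Professional doctorates", "Studies in Higher Education")],
    [("Smith", "Action research in practice", "J. of Management Studies"),
     ("Brown", "Work-based learning", "Studies in Higher Education")],
]

all_refs = [ref for article in references for ref in article]
for label, index in [("authors", 0), ("works", 1), ("journals", 2)]:
    counts = Counter(ref[index] for ref in all_refs)
    duplicated = [item for item, n in counts.items() if n > 1]
    print(label, duplicated)   # candidates for key people, seminal works, core journals
```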
Stage 5. Seminal articles:
Assemble the articles and other works, such as books and reports, you identified while finding the duplication.
Stage 6. Core journals:
Each of the core journals should publish an index of contents at regular intervals. Look at the indexes of contents of the core journals for the last few years to identify the most recent articles addressing the topic of the research, or closely related issues.
Stage 7. Key people:
Use a citation index to look for other published work by the authors you have identified as potential key people in the field. A citation index is a regularly updated index that lists works cited in later works. It includes a list of the sources from which the citations were gathered. Researchers use it to locate sources related by subject to a previously published work. For example, a search in a citation index for Dr Smith's work will reveal all the articles that have cited or referred to Dr Smith. This will enable you to identify what else the authors have written that is relevant to the topic in question. You may also wish to send some authors a note, perhaps by e-mail, to explain your interest in the topic. You could, for example, enclose a list of your top ten references to date and ask them to add any notable omissions from your list, including anything else that they have written on the topic. You may get back copies of work that is so new it has not yet been published.
Stage 8. Further iterations:
Stages 5, 6 and 7 will have generated additional references. Select, from them, a handful that look most promising and repeat the process from Stage 4 onwards to identify additional potential seminal works, core journals and key people. This process can then be repeated until no new works, journals or people are found. The principle of diminishing returns to search should ensure that the process will not continue indefinitely.
KEYWORDS:
Generate as many plausible keywords as possible and then rank the most plausible ones in terms of their relevance to the research topic. The following list of ways to generate plausible keywords is a selection of the ideas resulting from a brainstorm of MBA students on a creative problem-solving course on this theme:
- Use a dictionary
- Use a dictionary of synonyms and antonyms
- Use known academic journals/trade journals
- Use the contents/index pages of a relevant book
- Do an Internet/Intranet search
- Use the contents pages and bibliography of other dissertations
- Look at keywords in other people's published articles in journals to get an idea of what makes a good set of keywords
- Talk to practitioners
- Check for a relevant regulatory body
- Use one keyword and Amazon.com
- Annual reports of organizations in the field
- Mindmaps/brainstorming
- Rich pictures
- Journal articles/references in articles
- Newspapers and other media
- Focus groups/action learning set
- Library catalogues
- Newsgroups
- Trade conferences
- Conference proceedings
When you need to search the literature in an unfamiliar field, or across unfamiliar fields, you need a structured approach to avoid the frustrations of a haphazard hunt.
Specimen search profile:
Topic
Development of professional doctorates in England during the 1990s
Research question
What is the rationale for the development of professional doctorates?
Aim of the investigation
Discover the thinking behind the development of professional doctorates in England in the 1990s.
Parameters
- Doctorate
- Doctor of
- Professional doctorate
- Practitioner research
- Taught doctorate
- Continuing professional development
- Work-based learning
===================================================
Creativity:
The creative web for Macro-projects:
The creative process at the macro-level features four main conceptual spaces:
- Problem-definition space
- Methods space
- Solutions space
- Implementation space
1. Problem-definition space: It consists of Creative problem finding, and Preparation and immersion.
- Creative problem finding: Goal/objective setting - Identification of discrepancies, anomalies, "chaos", and intriguing ("impossible") problems - Initial idea prompters - Definition of problem.
- Preparation and immersion: Acquisition of a database of knowledge (data and information) and skills - Planning: search for and collection of data and information.
2. Methods space: Re-engineering, exploration and generation/incubation - (Unexpected) synthesis/illumination.
3. Solutions space: Execution (experimentation) and testing - Evaluation and verification.
4. Implementation space: Presentation, acceptance and/or implementation.
The creative life space is considered as a set of external factors. The model contains a maximum of seven modules and covers basic creativity and innovation. Each module can be regarded as a task or sub-project. In practice, the modules in a particular space would flow into one another so that they could not be regarded as separate or distinct. All the modules in the basic creativity area (problem definition and methods spaces: modules 1-4), solution, and implementation spaces are interconnected into the creative web and their relationships are recursive. The creative web illustrates problem solving in space rather than in time.
Although we live in the constancy and flow of time, we solve problems spatially; the space may be physical and/or conceptual (abstract). In the creative web, as in jigsaw puzzles, there is no fixed path or set of modules to the solution of problems. What is important is that the problem solver at least visits the problem-definition space, the methods space, and the solution space.
In some creative problems, the problem solver may even start with the solutions space, for example by answering the questions: 'What am I going to create? What end product do I want?' This approach is applicable to, for example, the design of questionnaires when the output or desired results are first specified and then the corresponding questions are formulated. The creative web presents a general framework and can be used for a wide variety of creative projects. In research, the creative web can be used as a framework for structuring and programming the whole or selected modules of a research proposal. The creative micro-process can be used together with creativity strategies and tools, within any module. Breakthrough insights could occur before, during and after specific tasks.
Creativity, ideas management, and creative problem solving are central to innovative research. Creativity cannot be described adequately by a single definition or process. It mainly involves novelty, utility, and the process of problem solving. Although both goal-directed and organic approaches are involved in problem solving, the goal-directed approach or creative problem solving (CPS) more directly relates to postgraduate research. The suggested framework for creative problem solving is the creative web.
In the literature, many tools and techniques are offered to enhance creativity, ideas management, and problem solving. These tools and techniques can be synthesized in a conceptual framework that allows the use of all tools and techniques. In versatile thinking as a framework, tools and techniques of creativity, ideas management, and problem solving can be integrated. The tools of versatile thinking include the versatile matrix and map.
================================================
When to use creativity:
Creative (non-linear explorative) thinking is complementary to logical (linear critical) thinking. Each is one half of total thinking and is a set of tools for mining, exploring and re-engineering knowledge as well as skills. King summarises his view of thinking, expression and problem solving in the acronym BEAR: Bring Every Available Resource. In the BEAR strategy, creative and logical thinking exist on a continuum. You should be prepared to use any thinking strategy or tool to achieve your objective. When both creative and logical thinking are highly developed, internalized, and integrated in a philosophy of versatile thinking, the quality, quantity, breadth, depth, unexpectedness (surprise/novelty), and coherence (utility/value) of outputs in many domains increase tremendously. Creative or versatile thinking is not a substitute for domain-specific knowledge and skills.
BEAR strategy in cognitive situations:
Logical thinking:
- Vertical (linear) exploration
- Critical; dialectical (antithetical)
- Converging
- More serious
- Routine
- Conventional
- Priority on utility/efficiency/effectiveness
- Rigid problems and (stable) solutions
Creative thinking:
- Lateral (non-linear) exploration
- Judgement-free
- Diverging
- More entertaining
- Novel
- Unconventional
- Priority on novelty/surprise
- Flexible (possible) solutions
However, in what situations does creative thinking offer advantages over logical thinking? Creative thinking is more effective and efficient than logical thinking in three types of case:
Case Type I. Situations of creative (verbal, visual and kinaesthetic) perception, expression and construction: Examples include when novel perceptions and ideas as well as unexpected (surprising) but coherent outputs are required, as in radical innovation, discovery, invention, design and scenario presentation.
Case Type II. Situations of apparently impossible tasks and problems: Examples include situations, tasks and problems that are perceived as impossible, intractable, unsolvable, extremely difficult, complex and non-routine. Such situations inherently contain problems that could be described as 'ill-defined' or 'well-defined but having solution-paths that are unknown or impossible to attain.'
Case Type III. Situations of (anticipated) unproductive logical thinking: Creative thinking may be more effective and efficient than logical thinking in producing outputs that are unique, novel and imaginative. Logical thinking often leads to converging and similar solution-paths, while creative thinking results in diverging, unexpected and unique solution-paths.
Tasks during postgraduate research include the following:
General
- Review literature
- Obtain data and information; explore and generate ideas
- Structure and present ideas, especially as diagrams
- Overcome writer's block
- Meet deadlines; achieve targets
- Find and solve apparently impossible problems of interest to yourself and 'clients'
- Organise, store, and manage materials as well as ideas
- Discuss ideas
- Solve personal problems, including the avoidance of distractions
- Write the thesis, dissertation, or chapters
Problem-definition space
- Choose a research topic or task
- Explore a topic
- Discover anomalies and important issues
- Make hypotheses and suggestions
Methods space
- Identify trends, patterns, and relationships; explore situations
- Analyse situations, data, information, problems and causes
- Design surveys and questionnaires
- Design experiments
- Theorise, including the development and testing of models
Solutions space
- Gain insights, perceive connections, and envision solutions
- Explain and summarise ideas
- Synthesise and construct, including the making of scenarios
- Evaluate alternatives and select preferred alternative(s)
Implementation space
- Communicate or express ideas, findings, and results
- Present reports, including the final dissertation or thesis
These tasks can be reframed as problems. Here, a problem is defined as an obstacle, resistance, and/or missing link that inhibits the paths to a desired space or state.
Types of problems, methods, and solutions in a problem space are explained as follows:
Level 1.1:
Problem-definition space/Type of problem:
General open-ended problems:
- Ill-defined
- Broad
- Weak linkages
- Fluid (flexible) boundaries
- Inventive
- Impossible
- Soft data and information
Methods space/Recommended means:
- Conceptual (lateral) exploration
- Generic creativity strategies and tools
- One-line (sentence) strategies
Solutions space/Expected outcomes:
- Conceptual solution-paths or solutions, e.g. general strategies
- Multiple (possible) approaches
Nature of outcomes:
- Broad; at policy level
- Points to 'direction' of detailed solutions
- Further exploration would be required, especially using conceptual and domain-specific methods
- Generally, least-time consuming
Level 1.2:
Problem-definition space/Type of problem:
Specific open-ended problems:
- Domain specific
- More detailed
- 'soft' to 'hard' data and information
- Explicit contradictions
Methods space/Recommended means:
- Conceptual (lateral) and detailed (vertical) exploration
- Generic as well as detailed creativity techniques and instruments
- Domain-specific knowledge and skills
- Strategies having two or more sentences
Solutions space/Expected outcomes:
- Detailed solution-paths, e.g. specific strategies
- (Non-)unique solutions
Nature of outcomes:
- More specific; at strategic level
- Point to 'area' of detailed solutions
- Further exploration using domain-specific methods and templates may be required
- Time-consuming
Level 2:
Problem-definition space/Type of problem:
Close-ended problems:
- Well-defined; detailed
- Specific
- Strong linkages
- Rigid boundaries
- 'routine'; conventional
- Domain-specific
- 'hard' data and information
- Specific obstacles or contradictions
Methods space/Recommended means:
- Conceptual and detailed domain-specific exploration
- Domain-specific or operational knowledge and skills
- Vertical methods and templates
- Detailed procedures or answers
Solutions space/Expected outcomes:
- Narrow solution-paths and detailed (operational) solutions e.g. action plans or detailed instruments
- Unique answers or solutions
- Product or artefact
Nature of outcomes:
- Specific; at 'shop' level
- Points to 'spot' of solutions
- Implementable results
- Generally, most time-consuming to realize
=================================================
Synthesis and evaluation of creativity:
Versatile thinking rests on four interconnected pillars:
- Template theory for versatile creativity
- Pattern and Object (PAO) thinking
- Versatile matrix of strategies for thinking
- Versatile map for problem solving
Versatile matrix:
Creativity tools are usually applied to open-ended or ill-defined problems; they focus on lateral rather than vertical exploration. Vertical, as well as lateral tools should be used to explore knowledge spaces fully and to obtain highly creative solutions. Creativity tools and techniques can be classified according to strategies for thinking. The matrix is a comprehensive classification system for creativity tools and techniques in the literature. The strategies are classes of objectives for creativity tools and techniques.
Versatile matrix of strategies for thinking is illustrated in the following link:
Versatile matrix of strategies for thinking
One advantage of the versatile map is that it facilitates holistic problem-solving since the problem-definition, methods and solutions spaces are simultaneously seen and explored. Another advantage is that the versatile map can be used to manage ideas and tasks in each module of the creative web:
- Creative problem-finding
- Preparation and immersion
- Reengineering, exploration, and generation/incubation
- (Unexpected) synthesis/illumination
- Execution (experimentation) and testing
- Evaluation and verification
- Presentation, acceptance, and/or implementation; creative life space
Versatile map for problem solving (Template) is illustrated in the following link:
When using the versatile map, words, sentences, paragraphs, drawings and/or tables are added to the existing branches of the template. This is called mindstorming, a cross between mind mapping and brainstorming; mindstorming simply means suspending judgement. On the versatile map, mindstorming is used to populate and enrich the problem-definition, methods and solutions spaces with ideas. In mindstorming there is no right or wrong answer. If an answer does not appear worthwhile, it can be categorised under the branch of (intermediate) idea prompters. Such intermediate prompters might later generate strategies or solutions that can then be included in the solutions space. When mindstorming, I usually play the role of a 'paoist', whose maxim is: 'every object in the universe is connected.' The ideas on a versatile map can therefore be generated intuitively, randomly and systematically with the view that everything will eventually be connected.
Based on ideas in the template theory and PAO Thinking, a checklist of generic questions has been compiled that can be used with the versatile map. This is the versatile checklist. In the questions, the word 'object' may refer to a situation, task, activity, problem, opportunity, objective or project. For best use of the versatile map, I recommend that you convert given issues to 'impossible' situations, tasks and problems. The versatile checklist could facilitate the generation of 'impossible' situations, tasks and problems. An alternative approach for generating an 'impossible' or 'inventive' problem can be found in the ideal solution method in the Theory of Inventive Problem Solving, which has the Russian acronym 'TRIZ'. In my experience, to be creative means to think from the impossible through the improbable to the probable.
Versatile checklist:
- (a) Do you think a template exists for ['object']? (b) Which types of templates are similar to ['object'] in core, peripheral/parallel and remote domains?
- How many and different ways of ['object'] exist in core, peripheral/parallel and remote domains?
- To which classes of structural templates does ['object'] belong: stone-heap, chain, tree and/or web templates?
- (a) What are the rules for generating ['object']? (b) Which concepts, theories, hypotheses, laws, formulae, diagrams, charts and/or models could be related to ['object']?
- What basic rules for ['object'] could be violated or broken?
- Which rules cannot be violated or broken?
- In how many and different ways could ['object'] be: (a) observed, recognized, discovered, obtained and explored? (b) deconstructed and analysed? (c) adapted and modified? (d) combined, synthesised, 'sculpted' or constructed? (e) envisioned and transformed? (f) presented and/or implemented?
- What are (creaLogical) alphabets, vocabularies, and patterns of ['object'] in core, peripheral/parallel, and remote domains?
- Which further strategies, tools and techniques from the versatile matrix could be used?
Software and websites for creativity:
Creativity, ideas management and problem solving comprise such a wide field that almost any software could be regarded as suitable for this field.
Generic software for creativity and ideas management includes:
- Word processing software
- Outlining and graphic organisers
- Presentation (drawing) software
- Database software
- Memory management software
Generic software for problem solving would include project management software. Software packages that have been developed with a view to creativity, ideas management, and problem solving and also some websites on creativity, ideas management and problem solving are listed in the following link:
Survey research:
Survey research and questionnaire research are not the same thing. Although questionnaires are frequently used in surveys, there is no necessary link between surveys and questionnaires. There are two distinguishing characteristics of surveys: the form of data and the method of data analysis. Neither of these features requires questionnaire-based data collection: in-depth interviews, observation, content analysis and so forth can also be used in survey research.
Response alternatives:
For questions where the response categories can be ranked from high to low in some respect, the link below provides some excellent examples of well-evaluated sets of response alternatives.
Sources of questions:
Rather than unnecessarily developing new questions, it makes sense to use well-developed and tested questions that have been used in reputable surveys. The sites below are excellent sources of questions and provide ideas about question format and structure:
The Question Bank: Question Bank
General social survey: General social survey
MZES data base: MZES data base
In addition, many of the online survey design packages and questionnaire design packages include libraries of questions. There are also a number of excellent handbooks of collections of questions on a range of topics. References to these collections are available from the website: Social Research.
Software for producing questionnaires:
The task of laying out questionnaires has been made easier by the power of word processors. Specialised software developed for electronic surveys has made the process even simpler. Some of these packages are listed as links below:
Web surveys can be designed and administered online. There are a number of sites where this can be done at no charge. Examples are given in links below:
Web-based questionnaires can be designed easily with the aid of special software. Further information about the range of software with which to conduct computer-assisted surveys can be found at: Meaning.
Ethics:
The full codes of ethics of a number of different professional organisations provide a much fuller outline of the types of ethical issues that must be taken into account in social science research, and survey research in particular. Codes of ethics for survey and social science research are listed in links below:
Sampling general populations:
Populations that might be of interest to particular studies can broadly be classified as either general or special. There is no precise definition of the difference between the two, but a general population can be thought of as one that includes a large proportion of the total population in the geographical area of interest to the study, perhaps a quarter or more. This would include 'all adults', 'all households with a telephone', 'all married women'. A special population would be a smaller proportion of the total. It may be called a minority population. The two lists most commonly used as sampling frames of addresses in the UK are the Postcode Address File (PAF) and the Electoral Register (ER).
SARs: Samples of Anonymised Records (SARs) are also available following the census. This resource is useful for micro-level analysis within the constraints of the census principle of confidentiality. The most up to date information about SARs is available from the Census and Microdata Unit at the University of Manchester, which is responsible for providing this information. Its website includes useful contacts, news and information on current research using this source.
Population Estimates:
Because the Census is held at only ten-year intervals, researchers have to use estimates of population during the intercensal period. Estimates for England and Wales are available on CD-ROM/disk, via the Internet at Population Estimates (on the StatBase database) and from the ONS.
Population projections:
The official population projections in the UK, at national or sub-national level, are trend based. They represent scenarios without sudden or substantial shifts in the assumptions of mortality, fertility and net migration. The national projections are available as a reference volume, on CD-ROM, via the Internet on the GAD website and by request. Long-term sub-national projections for England are published as a reference volume, on CD-ROM, and on the Internet via StatBase (the ONS database for national statistics, GAD website) and by request.
International comparisons:
National trends can be compared between countries. The following sources may help:
- United Nations Population Projections (Paper): Total population (by age group and gender), TFR, CDR, Net migration, Time series (50 years backwards and forwards)
- Eurostat NewCronos Database (Eurostat). (Key to abbreviations in this list: TFR = Total Fertility Rate, IMR = Infant Mortality Rate, CBR = Crude Birth Rate, CDR = Crude Death Rate.)
- Council of Europe - Recent Demographic Developments in Europe (Paper): Total population (by age and gender), population pyramids, total births, abortions, TFR, net reproduction rate, mean age at childbirth, mean age at first marriage, live births by order, international migration, natural increase, life expectancy, longitudinal data
- World Factbook - The World Factbook: Total population, age structure, growth rates, net migration, life expectancy, IMR, ethnicity, TFR
- World Population Datasheets - World Population Datasheets: Total population, density, birth rates, population policy, CBR, CDR, TFR, IMR, %Married, %Over 65, %Urban, Projections 2025 and 2050
Websites for software and hardware:
- Adept Scientific: A commercial site for information on HpVee, data collection software and hardware and analysis software.
- Lab view: A commercial site for information on Labview, data collection software and hardware and analysis software.
- MATLAB: A commercial site for information on Matlab, data collection software and hardware and analysis.
- Visual C++: A commercial site for information on C++.
- Amplicon: A commercial site for information on instrumentation for measurement.
- Analog: A commercial site for information on instrumentation and sensors for measurement.
- Digital signal processing: An online resource for digital signal processing
- Engineers4engineers: A resource site for information on units, measurement and instrumentation.
Websites on safety and standards:
- European Safety Federation: Website for the European Safety Federation
- MDSS: Website for the Medical Device Safety Service
- IECEE: International Electrotechnical Commission (IEC) System for Conformity Testing and Certification of Electrical Equipment.
===============================================
Elementary statistics:
Statistics is an unusual word because it has two meanings. It refers both to numerical data describing some phenomenon, and to the science of collecting and analysing those data. The first meaning is tied to particular data; the second, sometimes expanded to statistical science, describes methods that can be applied in any domain.
Scales:
Data come in several forms and it is useful to distinguish between them. First we can distinguish between numerical and non-numerical measurements. Examples of the former are a person's age or weight and the size of a family. Examples of the latter are the position of a mark on a scale indicating one's extent of agreement with some statement, and scores of mild, moderate or severe on a pain scale.
Simple distributions:
Some distributions are particularly important in statistics, either because they are very common or because they have attractive mathematical properties.
Bernoulli distribution:
The Bernoulli distribution arises when there are two possible outcomes, with probabilities p and (1 - p). A classic example is the toss of a coin, and if the coin is 'fair' both p and 1 - p will equal 1/2.
Binomial distribution:
An extension of this is the binomial distribution. This arises as the sum of a number, n, of Bernoulli outcomes. For example, instead of being interested in whether a single toss of a coin will come up heads or tails, we might be interested in the proportion of 100 tosses that come up heads: what is the probability that 50 will produce heads? Or 49? And so on.
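The binomial probabilities in the coin example can be computed directly. A minimal sketch in Python, using scipy.stats (an assumption; any statistics library would do):
```python
# Probability of exactly k heads in n tosses of a fair coin: Binomial(n, p).
from scipy.stats import binom

n, p = 100, 0.5
print(binom.pmf(50, n, p))   # P(exactly 50 heads), roughly 0.0796
print(binom.pmf(49, n, p))   # P(exactly 49 heads), roughly 0.0780
```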
Poisson distribution:
The binomial distribution is one possible model for counts. Another one is the Poisson distribution. Suppose, again, that n independent events are considered, with each of them having two possible outcomes (A and B), with the same probability that A will occur each time. Then, if n is large and the probability of A is small, the total number of times A occurs can be well approximated by a Poisson distribution.
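A minimal sketch of the approximation, comparing binomial and Poisson probabilities for large n and small p; the numbers are illustrative:
```python
# Compare Binomial(n, p) with its Poisson(n * p) approximation.
from scipy.stats import binom, poisson

n, p = 1000, 0.003          # large n, small probability of outcome A
lam = n * p                 # Poisson mean, here 3.0
for k in range(6):
    print(k, round(binom.pmf(k, n, p), 4), round(poisson.pmf(k, lam), 4))
```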
Normal distribution:
So far we have only considered discrete distributions in which the outcome can take only one of a discrete set of values (such as 0, 1, 2,...). In other situations, any value can occur (perhaps from a certain range). These are called continuous distributions. The most important example of these is the normal or Gaussian distribution. The normal distribution is often a sufficiently accurate approximation to empirical distributions that occur in real life, and this is one reason for its importance. Another is that, when large samples are involved, the normal distribution is often a good approximation to the distributions of statistics calculated from data.
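The second point can be seen in a small simulation. A minimal sketch, assuming skewed (exponential) data, showing that sample means behave as normal theory predicts:
```python
# Sample means of non-normal data are approximately normally distributed
# for large samples (the approximation property mentioned above).
import numpy as np

rng = np.random.default_rng(0)
# 10,000 samples of size 100 from a skewed exponential distribution.
means = rng.exponential(scale=1.0, size=(10_000, 100)).mean(axis=1)
print(means.mean())   # close to the population mean, 1.0
print(means.std())    # close to 1 / sqrt(100) = 0.1, as normal theory predicts
```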
Estimating parameters:
The statistics calculated from samples can often be regarded as estimates of parameters of the populations from which the samples were drawn.
Testing Hypotheses:
The ideas above can be used to test theories: calculate a statistic from the sample, then ask how probable a value at least as extreme would be if the hypothesised model were true. An improbably extreme value casts doubt on the hypothesis.
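A minimal sketch of such a test, with simulated data standing in for a real sample; the one-sample t-test is one standard choice:
```python
# Test the hypothesis that the population mean is 0, given a sample.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.3, scale=1.0, size=50)   # true mean is actually 0.3

t_stat, p_value = ttest_1samp(sample, popmean=0.0)
print(t_stat, p_value)   # a small p-value casts doubt on the hypothesis
```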
Further statistical methods:
Here we move on to describe some more advanced techniques.
Regression analysis:
Regression analysis is a statistical model-building technique. It relates a single response or dependent variable to one or more predictors or independent variables. The model, a mathematical equation, can be used as a summary of the relationship between the response and the predictors, and it can also be used to predict the value of the response given values of the predictors. The model takes the form of a simple weighted sum of the predictor variables.
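A minimal sketch of fitting such a weighted sum by ordinary least squares; the data are simulated for illustration:
```python
# Fit a regression model: response = weighted sum of predictors (+ intercept).
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.uniform(0, 10, 100)
x2 = rng.uniform(0, 5, 100)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(0, 1, 100)   # true relationship

X = np.column_stack([np.ones_like(x1), x1, x2])          # intercept column + predictors
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)       # estimated weights, close to (2.0, 1.5, -0.8)
print(X @ coef)   # predictions: the weighted sum applied to each observation
```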
Logistic regression:
In statistics, logistic regression (sometimes called the logistic model or logit model) is used for prediction of the probability of occurrence of an event by fitting data to a logistic curve. It is a generalized linear model used for binomial regression. Like many forms of regression analysis, it makes use of several predictor variables that may be either numerical or categorical. For example, the probability that a person has a heart attack within a specified time period might be predicted from knowledge of the person's age, sex and body mass index. Logistic regression is used extensively in the medical and social sciences as well as marketing applications such as prediction of a customer's propensity to purchase a product or cease a subscription.
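A minimal sketch in the spirit of the heart-attack example; the data, coefficients and variable names are entirely hypothetical:
```python
# Logistic regression: predict the probability of an event from predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 200
age = rng.uniform(30, 80, n)
sex = rng.integers(0, 2, n)            # 0 = female, 1 = male
bmi = rng.normal(27, 4, n)
# Simulate outcomes whose risk rises with age and body mass index.
logit = -12 + 0.1 * age + 0.5 * sex + 0.15 * bmi
event = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, sex, bmi])
model = LogisticRegression().fit(X, event)
# Predicted probability of an event for a 60-year-old man with BMI 30:
print(model.predict_proba([[60, 1, 30]])[0, 1])
```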
Discriminant analysis:
Prognosis means determining the likely future outcome, and is important for people who have suffered a head injury. It will depend on many factors, including age, response to stimulation, and change in neurological function over the first 24 hours after the injury. We would like to predict the future outcome, say recovery or not, on the basis of some of these variables. Again, we shall build a model to do this, and again we will base our model on a sample of people. In particular, we will have a sample of people who have known predictor variable values (such as age) and whom we have followed up so we know their outcome. There are many ways in which the distributions might be modelled, but the most common assumes a particular class of forms for the distributions. This model leads to what is known as linear discriminant analysis. It is called this because it leads to a predictive model that has the form of a weighted sum (a linear combination) of the predictor variables.
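A minimal sketch of linear discriminant analysis on the prognosis theme; the two predictor variables and all numbers are illustrative, not clinical:
```python
# Linear discriminant analysis: a weighted sum of predictors separating groups.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
# Two known-outcome groups with different typical (age, response score) values.
recovered = rng.normal([40, 12], [12, 2], size=(50, 2))
not_recovered = rng.normal([60, 8], [12, 2], size=(50, 2))
X = np.vstack([recovered, not_recovered])
y = np.array([1] * 50 + [0] * 50)       # 1 = recovered, 0 = not

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.coef_)                # the weights of the linear combination
print(lda.predict([[45, 11]]))  # predicted outcome for a new patient
```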
Analysis of variance:
All the techniques have the same underlying form: a weighted sum of the predictor variables. Another very common type of technique, which also has this underlying structure, although this is often concealed in elementary descriptions, is analysis of variance. This is aimed at describing the differences between groups of objects, so it is closely related to the discriminant analysis of the previous section.
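A minimal sketch of a one-way analysis of variance, asking whether three groups share a common mean; simulated data stand in for real measurements:
```python
# One-way ANOVA: do the group means differ by more than chance would explain?
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)
group_a = rng.normal(10.0, 2.0, 30)
group_b = rng.normal(11.0, 2.0, 30)
group_c = rng.normal(12.0, 2.0, 30)

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f_stat, p_value)   # a small p-value suggests the group means differ
```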
Other methods:
So far, all the techniques are predictive in the sense that they seek to determine the likely value of one variable given an object with known values of the other variables. Not all questions are like this, however. Another whole class of models is concerned with describing the relationships between variables and objects when no variable can be separated out as a response. Principal components analysis is one such. This technique allows us to determine which combinations of variables explain the most differences between the objects in the sample.
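A minimal sketch of principal components analysis, on simulated data in which two of three variables share a common source of variation:
```python
# PCA: find the combinations of variables explaining most of the variation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
z = rng.normal(size=(100, 1))
# Three variables measured on 100 objects; the first two are strongly related.
X = np.hstack([z + 0.1 * rng.normal(size=(100, 1)),
               2 * z + 0.1 * rng.normal(size=(100, 1)),
               rng.normal(size=(100, 1))])

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)   # share of total variation per component
print(pca.components_)                 # the weighted combinations of variables
```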
Time Series:
All of the methods outlined above involve multiple measurements on each object. Typically, there will only be a few of such measurements and there will be several or many objects. Time series are ubiquitous forms of data. Examples are: stock closing prices at the end of each trading day; temperature at a particular location measured at midday each day; daily rainfall; and an individual's body weight measured at 8:00 am each day.
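A minimal sketch of a basic time-series operation, smoothing a simulated daily temperature series with a seven-day moving average:
```python
# Smooth a daily series with a 7-day moving average to damp daily noise.
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(365)
# Simulated midday temperatures: a seasonal cycle plus random noise.
temps = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, 365)

window = 7
smoothed = np.convolve(temps, np.ones(window) / window, mode="valid")
print(temps[:10])      # raw values
print(smoothed[:10])   # smoothed values track the seasonal pattern
```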
Other techniques:
Statistics is a vast domain, with methodological research going on all the time; new methods for new problems and improved methods for old problems are being developed. Recently, stimulated in part by the possibilities presented by the growth in computer power, new classes of flexible multivariate techniques have attracted a great deal of interest. These include:
- Neural networks
- Projection pursuit regression
- Radial basis function models
- Multivariate adaptive regression splines
Computer support for data analysis:
In the 21st century, no postgraduate student will complete the analysis of his or her research data without turning to a computer. The computer has moved from being a widely available and convenient tool to being both ubiquitous and essential.
Confirmatory and Exploratory analysis:
Most research programmes will profit from exploratory as well as confirmatory data analyses. Briefly, confirmatory analyses are those that answer the questions that drove your research. Exploratory analyses provide clues about how better to design your next study. Exploratory analyses have been described as unplanned or dependent upon the data, once collected and examined.
Computing resources:
There are statistical procedure packages and statistical languages. The former typically offer a limited number of statistical procedures, either standard or specialised, in an environment easily traversed by the user. The latter provide an extensive set of tools that a knowledgeable user could deploy to implement an extremely rich range of procedures.
The websites of the Royal Statistical Society (RSS) and the American Statistical Association (AMSTAT) provide links to the websites of package publishers. An impressive list of statistical packages with links to their web sites is maintained on the website of the publishers of the statistical package Stata (Stata).
Standard statistical packages:
- Genstat (from general statistics) is the product of researchers at the Rothamsted Experimental Station and is distributed by the Numerical Algorithms Group (NAG).
- SPSS (statistical package for the social sciences, SPSS)
- Minitab (Minitab)
- S-plus, the augmentation of the statistical language S distributed by Insightful (formerly MathSoft, S-plus)
Specialised statistical packages:
- The Microsoft spreadsheet program, Excel, is increasingly popular as a statistical analysis platform, particularly in business schools. The statistical capabilities of Excel cover the range of procedures associated with an introductory course. For more advanced analyses, third-party add-ons may be available. For example, XLSTAT (XLSTAT) provides a set of 25 tools to facilitate data management and to implement a variety of non-parametric tests, analyses of variance, and the more popular multivariate analyses.
- BUGS (Bayesian inference Using Gibbs Sampling, BUGS) provides the first platform for the study of Bayesian conditional independence models that is relatively easy to use.
- Resampling Stat (Resample) provides a platform for quickly developing applications of permutation (randomisation) tests and bootstrap inference.
- StatXact (StatXact) provides implementations of the more common permutation tests.
- BLOSSOM (BLOSSOM) is another package of specialised permutation tests. The package features permutation applications to multiple-response data and linear models. BLOSSOM is available at no cost from the US Geological Survey.
In addition, there is an organisation called ASSUME (The Association of Statistics Specialists Using Microsoft Excel). Look at its website which has links to many useful sources of information, articles and reviews: ASSUME.
==============================================
Mathematical models:
We all use simple mathematical models in our everyday lives. The most common example is arithmetic, which we use for calculating monetary transactions, amongst other things. The numbers represent the physical currency, or perhaps electronic credits, but the example is exceptional because no approximation need be involved. A more typical example is provided by the answer to the question: how much wallpaper do you need to redecorate your living room? You could probably model the area to be papered by rectangles, work out their areas, add them up, and make some allowance for matching edges of rolls and so on. Your rectangles will not correspond to the three-dimensional shape of the room, and will not include details such as light switches, but the simple model is quite suitable for calculating the number of rolls of wallpaper you should buy. However, it would not be adequate for working out suitable sizes and positions for central heating radiators. The mathematical models needed for research programmes are more complicated, but the sequence is the same:
- Stating the problem
- Formulating a relevant mathematical model
- Obtaining the solution
- Interpreting the solution in the practical context
We should also monitor the accuracy of our solution, and then refine our model to improve subsequent predictions.
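As a toy illustration of this sequence, here is a minimal Python sketch of the wallpaper model; the room dimensions, roll size and wastage allowance are invented for the example.

```python
import math

# Hypothetical room: model each wall as a rectangle (width x height, metres).
walls = [(4.0, 2.4), (4.0, 2.4), (3.0, 2.4), (3.0, 2.4)]
door_window_area = 3.5                 # openings not papered (m^2), assumed
roll_length, roll_width = 10.0, 0.53   # an assumed roll size (m)
wastage = 1.15                         # assumed 15% allowance for matching edges

# Formulate the model: total area is the sum of rectangles minus openings.
area = sum(w * h for w, h in walls) - door_window_area

# Obtain and interpret the solution: whole rolls to buy, rounded up.
rolls = math.ceil(area * wastage / (roll_length * roll_width))
print(f"Paperable area is about {area:.1f} m^2; buy {rolls} rolls")
```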
Learning mathematics:
If you are an engineer, mathematician or physicist you will already have had considerable experience of mathematical modelling, and will have the concepts needed to teach yourself new techniques relatively easily. Even so, finding a relevant course at either postgraduate or undergraduate level should help you learn new methods more quickly. Kreyszig's book, now in its 8th edition, is a comprehensive general reference work. The book Statistics in Engineering emphasises the modelling aspects of the subject.
Mathematical software:
Modern computers have greatly reduced the need to learn the detail of mathematical methods and it will often suffice to understand the general principles. For example, the details of efficient algorithms for calculating matrix eigenvalues can be ignored by non-specialists now that software such as Matlab can provide answers at the touch of a button. However, Matlab does not substitute for an understanding of the concept of eigenvalues and eigenvectors.
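For instance, a minimal NumPy sketch (standing in for the Matlab example above) returns eigenvalues and eigenvectors in one call, although interpreting the output still requires the underlying concept:

```python
import numpy as np

# A small symmetric matrix; eigh handles the symmetric/Hermitian case.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)
print("eigenvalues:", eigenvalues)
print("eigenvectors (as columns):\n", eigenvectors)

# Check the defining relation A v = lambda v for the first pair.
v, lam = eigenvectors[:, 0], eigenvalues[0]
assert np.allclose(A @ v, lam * v)
```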
Computer algebra, such as Maple, is another valuable aid but a sound knowledge of algebra is needed to make good use of it. Mathematica is another powerful software system for numerical and symbolic computation and scientific graphics. Fortran is still commonly used for research work in engineering, despite the increasing use of C++, and the NAG subroutines are invaluable. Another source of algorithms is Numerical Recipes in Fortran. The following website is a useful resource: Fortran.
The high-level programming language J (Iverson Software Inc) also contains powerful mathematical functions (phrases). Spreadsheet software, such as Excel and Lotus, can be used for simple, but effective, mathematical modelling, and has the advantage of being widely available.
Promoting applications of mathematics:
The Society for Industrial and Applied Mathematics (SIAM) was inaugurated in Philadelphia in 1952; its website is at: SIAM. The Institute of Mathematics and its Applications (IMA) was founded in England in 1964, with similar objectives; its website is at: IMA. Its bulletin, Mathematics Today, contains general interest articles that describe novel applications, which are certainly not restricted to engineering and science.
Advanced courses:
The London Mathematical Society (LMS), the Isaac Newton Institute for Mathematical Sciences at the University of Cambridge, and the International Centre for Mathematical Sciences (ICMS) in Edinburgh all offer short courses from time to time. For example, the LMS, together with EPSRC, is now advertising a short course in wave motion, with an emphasis on non-linear models; it is aimed at research students working in mathematical modelling, rather than exclusively at research mathematicians. The Society's website is at: Mathematics short course activities. The ICMS is offering an instructional conference on non-linear partial differential equations, aimed at postgraduate students at the beginning of their research programmes, with EPSRC providing financial support for the majority of participants. The ICMS website is at: ICMS.
Types of mathematical models:
Deterministic models:
- Linear dynamics: Vibration of rotors - Population dynamics and epidemiological applications - Economic models: linear differential and difference equations.
- Non-linear dynamics: Wind induced motion of suspension bridges - Spatio-temporal patterning in biology - Predicting typhoons
- Control theory: Autopilots for aircraft - Biological control - Robotics
- Catastrophe theory: Phase transitions - Non-linear dynamics in nursing care - Economic models: some non-linear
- Partial differential equations: Fluid flow - Modelling methane fluxes in wetlands - Production functions such as Cobb-Douglas
Stochastic models:
- Markov chains: Dam storage - Speech recognition - Brand loyalty
- Point processes: Rainfall modelling - Birth and death processes - Monitoring effects of road safety measures
- Time series: Flood prediction - Epidemiology - Social trends
- Spatial processes: Distribution of ore in mining - Diffusion across membranes - Image processing of satellite pictures
- Signal processing: Instrumentation - Monitoring muscle function in babies - Verbal communication
- Simulation: Performance of computer systems - Epidemics - Inventory management and production planning
Optimisation:
- Calculus: Calculating flight paths of space exploration vehicles - Logistic growth model - Maximisation of a consumer's utility
- Descent algorithms: Airfoil geometry - Chemical reaction rates - Debt dynamics
- Linear programming: Gas transmission - Emission of greenhouse gases from agriculture - Blending problems
- Dynamic programming: Control algorithms - DNA sequence analysis - Allocation of water resources
- Critical path analysis: Programming techniques for microprocessors - Nuclear material safeguards - Project planning
- Genetic algorithms and simulated annealing: Optical telecommunications networks - Human posture recognition - Cooperative trade
- Artificial neural nets: Engineering applications - Processing EEG signals - Benefits transfer
Deterministic models:
A good mathematical model will be as simple as possible, while including the essential features for our application. The success of a model is usually judged in terms of the accuracy of predictions made using it, but we also try to capture something, at least, of the way we imagine the world to be. The following examples of mathematical models are chosen to illustrate the ranges of techniques and areas of application.
Discrete and continuous variables:
Although we perceive time and space as continuous, and the variables that vary over space or time (often referred to as field or state variables) may be intrinsically discrete or continuous, we do not have to model them in the same way. For example, the size of a population of animals is an integer, but if the population is large it can be treated as a continuous variable; time is continuous, but provided we sample sufficiently quickly we can model it as a sequence of discrete steps. The distinction is useful because somewhat different mathematical techniques are used for discrete and continuous modelling.
Linear and Non-linear:
The distinction between linear and non-linear dynamic systems is fundamental. A system is linear if the response to a sum of input signals is the sum of the responses that would result from the individual input signals. If the input signal is a sine wave, the steady state response will be a sine wave at the same frequency, but with a change in amplitude and phase; if the amplitude of the input signal is doubled, the amplitude of the response doubles. In algebraic terms, a differential equation that models a system is linear if the sum of any two solutions is also a solution. The theory of linear systems is thoroughly worked out, and has been applied successfully to a wide range of practical problems. Therefore, an attractive approach to modelling non-linear systems is to linearise locally.
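Superposition is easy to check numerically. The following sketch drives an assumed first-order linear system (dx/dt = -x + u, output y = x) with two sine inputs, separately and together, using scipy.signal.lsim:

```python
import numpy as np
from scipy.signal import lsim

# An assumed first-order linear system: dx/dt = -x + u, output y = x.
A, B, C, D = [[-1.0]], [[1.0]], [[1.0]], [[0.0]]
t = np.linspace(0.0, 10.0, 501)
u1 = np.sin(2 * np.pi * 0.5 * t)   # sine input at 0.5 Hz
u2 = np.sin(2 * np.pi * 1.5 * t)   # a second sine at 1.5 Hz

_, y1, _ = lsim((A, B, C, D), u1, t)
_, y2, _ = lsim((A, B, C, D), u2, t)
_, y12, _ = lsim((A, B, C, D), u1 + u2, t)

# Linearity: the response to the sum equals the sum of the responses.
assert np.allclose(y12, y1 + y2)
```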
Vibration control:
Although the simplest way to reduce unwanted vibration is to add dampers to the system, Victorian engineers also designed clever mechanical devices known as vibration absorbers. Both dampers and vibration absorbers, which are made up of small auxiliary masses and springs, are passive devices, inasmuch as they require no auxiliary sensors, actuators or power supplies.
Finite Element Modelling:
The finite element method is a computational method used routinely for the analysis of stress, vibration, heat conduction, fluid flow, electrostatics and acoustics problems.
Finite Difference Method:
The finite element method applies the exact equations for the idealised elements to a model of the system, made up from these elements. The finite difference method approximates the differential equations describing the original system, by replacing derivatives with ratios of small, rather than infinitesimally small, changes in the variables. The two models are conceptually different, and although many problems can be solved with either method, the solutions will not, in general, be identical.
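As a minimal sketch of the finite difference idea, the second derivative in the equation u''(x) = f(x) can be replaced by a ratio of small, rather than infinitesimally small, changes on a grid; the boundary conditions and forcing term here are invented for the example.

```python
import numpy as np

# Solve u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, replacing u'' by
# the difference ratio (u[i-1] - 2*u[i] + u[i+1]) / h**2.
n = 50                        # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.sin(np.pi * x)         # an assumed forcing term

# Tridiagonal coefficient matrix of the difference approximation.
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

u = np.linalg.solve(A, f)

# The exact solution is -sin(pi*x)/pi^2, so the error should be small.
print("max error:", np.max(np.abs(u + np.sin(np.pi * x) / np.pi**2)))
```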
Control Theory:
There are applications of control theory in almost all disciplines. Novel applications of the H-infinity criterion, which minimises the worst case, are still potential research projects. The method is supported by the Matlab Robust Control Toolbox.
Catastrophe theory (Singularity theory):
René Thom's famous treatise on catastrophe theory, which is now considered part of singularity theory, Stabilité Structurelle et Morphogénèse, published in 1972, was the culmination of work by him and others over the preceding ten years. He suggested using the topological theory of dynamical systems, originated by Henri Poincaré, to model discontinuous changes in natural phenomena, with a special emphasis on biological systems.
Other mathematical methods:
There are many other mathematical methods for modelling systems. Of particular importance are complex variables, which can be used for designing aerofoils; vector analysis, which is central to hydrodynamics; and Fourier analysis and integral transforms, which can be used for control system design, fracture mechanics and the solution of certain partial differential equations, amongst many other applications.
==============================================
Stochastic models and simulation:
The concept of random processes can be traced back at least as far as 2 BC. Although 'stochastic' is now synonymous with 'random', its original meaning in Greek is 'skilful in aiming'. This is appropriate because we aim to use probability theory and the theory of stochastic processes to model the occurrence of chance events and to provide the best possible predictions despite the uncertainty. We can also quantify the uncertainty by using, for example, 95% confidence intervals for unknown parameters and 95% prediction intervals for individual outcomes. Apart from short-term predictions, stochastic models are also used to generate many possible long-term scenarios for the evaluation of policies that might relate to portfolio management or the construction of flood defences.
A stochastic model starts from a deterministic model, often, but not always, a simple empirical relationship, and accounts for deviations between the model and the data by postulating random errors. These errors encompass inherent variation in the population being modelled, modelling error and measurement error. A typical research project will involve: thinking of reasonable models for the situation; fitting these models to existing data and choosing one or two that seem best from an empirical point of view; simulating future scenarios; and monitoring the success of predictions. The random errors are modelled by a probability distribution, and the generation of random numbers from a given probability distribution is an essential part of any stochastic simulation.
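A minimal sketch of that recipe, with invented data: fit a straight-line empirical model, estimate the error distribution from the residuals, and simulate future scenarios by adding random errors drawn from it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented observations: a linear trend plus random error.
x = np.arange(20.0)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, x.size)

# Deterministic part: fit y = a + b*x by least squares.
b, a = np.polyfit(x, y, 1)
sigma = np.std(y - (a + b * x), ddof=2)   # residual standard deviation

# Stochastic part: simulate many possible future scenarios.
x_future = np.arange(20.0, 30.0)
scenarios = a + b * x_future + rng.normal(0.0, sigma, (1000, x_future.size))

# An empirical 95% prediction interval for each future point.
lo, hi = np.percentile(scenarios, [2.5, 97.5], axis=0)
print(f"first future point: {lo[0]:.2f} to {hi[0]:.2f}")
```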
As with deterministic models, we do not have to model variables in the way we perceive them. For example, the Markov chain model for dam storage treats both the volume of water and time as discrete: the volume of water is measured in multiples of, typically, twentieths of the total capacity of the dam, and time jumps from the end of one dry season to the end of the next. The stochastic calculus, by contrast, deals with continuous variables defined over continuous time, and its financial applications are a popular research topic.
You may find the following summary of some of the main techniques of stochastic modelling, by category, helpful.
- Discrete field variable: over discrete space/time, Markov chains; over continuous space/time, point processes.
- Continuous field variable: over discrete space/time, time series models, signal processing of digitised signals, image processing and kriging; over continuous space/time, field models, spectral and wavelet theory for analogue signals, and the Itô and Stratonovich calculi.
Applications of Markov Chains:
The Markov property is that, given the present state, the future depends on the present but not on the past. In a Markov chain, the process can be in any one of a discrete set of states after each time step. In brand loyalty models, the state is the brand and the time steps are the times between purchases. More specialist topics include hidden state Markov chains, in which the state is not observed, and which have been used for such diverse applications as the modelling of genome structure, speech recognition and rainfall modelling. Markov chain Monte Carlo (MCMC) methods, such as the Gibbs sampler, are now commonly used in stochastic modelling, for example for the joint distribution of floods and their volumes. The Hastings-Metropolis algorithm is more general than the Gibbs sampler.
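As a minimal sketch, here is a two-brand loyalty model as a Markov chain; the transition probabilities are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented transition matrix: P[i, j] is the probability of buying
# brand j at the next purchase, given brand i was bought last.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# Simulate a long sequence of purchases.
state, counts = 0, np.zeros(2)
for _ in range(100_000):
    state = rng.choice(2, p=P[state])
    counts[state] += 1
print("simulated long-run market shares:", counts / counts.sum())

# Compare with the stationary distribution (left eigenvector of P).
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
print("theoretical market shares:", pi / pi.sum())
```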
Applications of Point Processes and Simulation:
In a point process, events, which result in a change of state of the process, occur at instants of time. There are many applications, including machine breakdowns and repairs, and queuing situations. In a typical queuing model, the state is the number of persons in the system and an event is the arrival of another person or the completion of a service. It is possible to build up more complex models by superimposing point process models and, for example, rectangular pulses for rainfall at a point. Simulation of point processes is known as discrete event simulation.
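A minimal discrete event simulation sketch for a single-server queue, with invented arrival and service rates; the events are arrivals and service completions, and the state is the number of persons in the system.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented rates: Poisson arrivals (0.8 per minute) and exponential
# service times (1.0 per minute) give an M/M/1 queue.
n = 100_000
arrivals = np.cumsum(rng.exponential(1 / 0.8, n))
service = rng.exponential(1 / 1.0, n)

# Event logic: service starts at arrival, or when the previous
# customer departs, whichever is later.
departures = np.empty(n)
prev = 0.0
for i in range(n):
    start = max(arrivals[i], prev)
    departures[i] = prev = start + service[i]

waits = departures - arrivals - service   # time spent queuing
# M/M/1 theory: mean queuing wait = rho / (mu - lambda) with rho = 0.8.
print("simulated mean wait:", waits.mean(), "theory:", 0.8 / (1.0 - 0.8))
```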
Applications of Time series and Simulation:
The risk of flooding in the short term has an important bearing on some civil engineering decisions. For example, contractors working on a dam face, from barges or with floating cranes, will benefit from accurate estimates of flood risk. Water engineers responsible for reservoir operation, who need to balance the requirements of flood control, provision of domestic and industrial water supply, public amenity and effluent dilution, will also benefit from up-to-date estimates of the risk of occurrence of high flows. In such cases, the risk of flooding will be influenced by prevailing catchment conditions and weather forecasts, as well as the average seasonal variation. Insurance companies might also have an interest in estimating short-term flood risks. The first step in estimating flood risk is to generate a sequence of wet and dry days for the required period using a Markov chain. Next, a random sample of daily rainfalls is generated for the wet days. An autoregressive model can then be used to generate a baseflow sequence. Events with a two-day rainfall total exceeding the rainfall threshold of 14 mm are identified, so at this stage the number of rainfall events is known. Now suppose that the probability of exceeding some critical flow is required. For each rainfall event, the probability of not exceeding this critical flow can be calculated from a Weibull distribution, and hence the risk of flooding during any required period can be calculated.
A computer program was written to calculate the flood risks for seven and 30 days ahead, as a function of current base flow, using the stochastic simulation.
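A heavily simplified sketch of such a simulation is given below; all parameter values are invented except the 14 mm threshold quoted above. Wet and dry days come from a Markov chain, rainfall is sampled on wet days, and two-day totals exceeding the threshold are counted as rainfall events.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented Markov chain for the weather: P(wet | dry) and P(wet | wet).
p_wet_after_dry, p_wet_after_wet = 0.3, 0.6
days, runs = 30, 10_000
events = np.zeros(runs)

for r in range(runs):
    wet, rain = False, np.zeros(days)
    for d in range(days):
        wet = rng.random() < (p_wet_after_wet if wet else p_wet_after_dry)
        if wet:
            rain[d] = rng.exponential(8.0)  # invented mean wet-day rainfall (mm)
    # Rainfall events: two-day totals exceeding the 14 mm threshold.
    events[r] = np.sum(rain[:-1] + rain[1:] > 14.0)

print("mean number of rainfall events in 30 days:", events.mean())
```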
Signal processing:
Signal processing covers a vast range of applications. A medical example involves recording the electromyographic (EMG) response to a pseudo-random sequence of taps applied to the biceps muscle. The biceps is adequately modelled as a single degree of freedom linear system and it follows, from the theory of spectral analysis, that the cross-covariance function between the tap sequence and the EMG response provides an estimate of the impulse response of the biceps. The importance of the method is that it gives clinicians a quick, painless test to ascertain whether babies are suffering from cerebral palsy; early detection increases the efficacy of treatment for the condition. A larger scale example involved the estimation of the frequency response functions for the three translational and three rotational motions of a large motorboat, from data collected during sea trials. From these, the risks of capsize in extreme sea states, and the accelerations experienced by crew at various stations on board, can be estimated. The wavelet transform is an important development in signal analysis.
A random field model for rainfall:
The distribution of rainfall, over time and space, is essential information for designers of water resource projects ranging from flood protection to irrigation schemes. Ideally, and provided there were no long-term climate changes, statistics could be calculated from long records over an extensive network of rain gauges. In practice, rain gauge networks are often sparse or non-existent and, even in countries with good coverage, records for periods exceeding 50 years are relatively uncommon. Furthermore, records usually consist of daily rainfall totals, and for some purposes, such as assessment of the hydraulic performance and pollution impact of sewers, finer resolution, down to five-minute rainfall totals, is needed. For some purposes, it may be possible to progress with rainfall at a single site. Other applications need rainfall at several sites, and projects that are more ambitious require a rainfall field model. The development of rainfall field models, and their calibration from radar data, is an active research topic. The rainfall model has been coupled with a deterministic rainfall-runoff model of the River Brue catchment in the South West of England and could be used for flood warning and the design of flood protection schemes.
Other random field models:
Another particularly active research area is the analysis of digital images from space probes, electron microscopes, brain scanners, and so on. Noise and blur have to be removed, and there is a wide variety of statistical techniques that can be used. The images are usually considered in terms of discrete grey scale values, defined over a grid of picture elements, typically 1024 x 1024. The grey scale is sometimes treated as continuous. Some good introductory papers can be found in the Internet directory (MCMC). Kriging is a method for spatial interpolation between a few point values. It was developed in the mining industry, and this is reflected in some of the terms used, such as the 'nugget effect'. Details can be found in texts on geostatistics.
===============================================
Optimisation:
The aim of optimisation is to choose values of variables so that some function of those variables (the objective function) takes its least value. Formulating the problem in terms of finding the least value of an objective function is not a restriction, because finding the greatest value of a function is equivalent to finding the least value of its negative. Optimisation problems are widespread and there are many techniques for solving them, ranging from the calculus, as in the linear optimal control problem, to stochastic trial-and-error methods such as genetic algorithms. Although most of these techniques were originally applied to deterministic problems, they have been adapted to deal with optimisation problems that include stochastic terms in their definitions. The subject of operational (or operations) research (OR) is mainly concerned with the application of optimisation methods in industry and commerce. The OR Society in the UK has a website at ORSOC and there are many journals devoted to the discipline.
Calculus:
If the objective function is a continuous and differentiable function of the variables, stationary points can be found, in principle, by setting partial derivatives equal to zero, although the resulting equations may have to be solved numerically. Constraints can be handled by the technique of Lagrange multipliers.
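As a minimal sketch of a stationary point found this way, using SymPy (an assumed choice of tool): minimise x^2 + y^2 subject to x + y = 1 via a Lagrange multiplier.

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)

# Invented objective and constraint.
f = x**2 + y**2
g = x + y - 1

# Set the partial derivatives of the Lagrangian to zero and solve.
L = f - lam * g
stationary = sp.solve([sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True)
print(stationary)   # [{x: 1/2, y: 1/2, lam: 1}]
```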
An important application is the general solution of the linear optimal control problem, and its stochastic variant, in which the system is subject to disturbance noise and the observations are subject to measurement noise. An associated and mathematically equivalent problem is constructing an optimal observer, which is usually referred to as a Kalman filter.
Linear programming:
Many optimisation problems are highly structured and there are very efficient methods for their solution. If you hope to solve optimisation problems with a large number of variables it is essential to use the most efficient algorithm available; since the size of the search space usually increases exponentially with the number of variables, this situation will not change with the introduction of more powerful computers. There are particularly efficient algorithms for solving linear programming problems.
The linear programming (LP) problem is to maximise, or minimise, a linear function of the variables subject to linear inequalities. Typical examples include blending problems and transportation problems. The inequalities define a feasible region bounded by hyperplanes. The optimum corresponds to one of the vertices of this region, and the search can therefore be restricted to vertices. The simplex method for solving the LP problem moves between vertices so that the value of the objective function improves or, at worst, stays the same. Many texts on linear programming include advice about microcomputer software; some recommend LINDO from Lindo Systems, who have a website at LINDO.
Standard LINDO will handle up to 200 variables and the extended version up to 100,000 variables. The method of projective transforms is an alternative for such large problems. The Matlab Optimisation Toolbox also includes algorithms for the efficient solution of linear programming problems. Stochastic variants of LP are usually discussed under the title of chance constrained programming.
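A minimal sketch of a small blending-style LP using scipy.optimize.linprog; the problem data are invented for the example.

```python
from scipy.optimize import linprog

# Invented blend of two ingredients costing 2 and 3 per kg.
# Minimise 2*x1 + 3*x2 subject to:
#   x1 + x2 >= 10 (total weight)          ->  -x1 - x2 <= -10
#   x2 >= 0.25*(x1 + x2) (quality floor)  ->  0.25*x1 - 0.75*x2 <= 0
c = [2.0, 3.0]
A_ub = [[-1.0, -1.0],
        [0.25, -0.75]]
b_ub = [-10.0, 0.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # the optimum sits at a vertex: x1 = 7.5, x2 = 2.5
```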
Mathematical programming:
The following techniques fall under the general heading of mathematical programming:
- Linear programming (LP)
- Non-linear programming (NLP)
- Integer programming (IP)
Dynamic programming and SDP:
The method can be used for a variety of problems, including the scheduling of work in factories and optimal control. A simple example is that of a traveller who intends to travel from city A to city Z in several stages. At each stage there are many possible intermediate destinations, and the costs of travel for all possible routes between stages are known. The most efficient way to minimise the total cost is to work backwards from Z. Critical path analysis is a related problem, and the programme evaluation and review technique (PERT) is a stochastic version of it. Fair allocation of water from a network of reservoirs to households, to farms for irrigation, and to industry is a vital issue in countries with arid climates, such as South Africa. A typical operating policy will specify the amounts of water to be released to the various recipients each month. The decisions that make up the policy depend on the amount of water in the reservoir, the time of the year, and the expected future inflows into the reservoir. These inflows are unknown and are described by a probability distribution. The determination of a policy that will optimise the benefits to the community is an example of a stochastic dynamic programming (SDP) problem. Simple SDP problems can be solved by using decision trees.
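A minimal sketch of the backward recursion for the traveller's problem, with an invented three-stage network of costs:

```python
# costs[k][i][j] is the cost of travelling from node i in stage k
# to node j in stage k + 1 (an invented three-stage network ending at Z).
costs = [
    [[4, 2], [3, 5]],   # starting cities -> two intermediate cities
    [[6, 1], [2, 4]],   # intermediates -> two further intermediates
    [[3], [2]],         # final intermediates -> Z
]

# Work backwards from Z: best[i] is the cheapest cost-to-go from node i.
best = [0.0]            # cost-to-go at Z itself
for stage in reversed(costs):
    best = [min(c + b for c, b in zip(row, best)) for row in stage]

print("minimum total cost from each starting city:", best)   # [7, 6]
```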
Descent algorithms:
Descent algorithms find a minimum of a continuous function of several variables by calculating approximations to local derivatives and then proceeding in the direction of steepest descent until a minimum is found along the line in that direction; the process is then repeated. These algorithms are used as part of a modelling procedure, rather than being a model in themselves. One valuable application is estimating the parameters of models that are non-linear in those parameters from experimental data (such as the useful non-linear regression routine in SPSS). There are many ingenious modifications of descent algorithms, and good software that implements them is readily available (such as the NAG subroutines and the Matlab Optimisation Toolbox). If you do not wish to rely on libraries of subroutines you can use algorithms from Numerical Recipes. The simplex method of Nelder and Mead, and its modifications, is a somewhat simpler and less efficient, but generally effective, means of finding a minimum. If the objective function depends on two variables only, this simplex method (which is quite different from the simplex method for LP) is easily described.
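For example, a minimal sketch applying SciPy's implementation of the Nelder-Mead simplex search to Rosenbrock's function, a standard test problem chosen here purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock's banana-shaped valley; the minimum is at (1, 1).
def rosenbrock(p):
    x, y = p
    return (1 - x)**2 + 100 * (y - x**2)**2

res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
print(res.x, res.fun)   # close to [1, 1] and 0
```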
GAs and Simulated Annealing:
Simulated annealing is a descent algorithm with small probabilities of moving in the direction of steepest ascent rather than descent; these probabilities become smaller as the value of the objective function decreases. Genetic algorithms (GAs) code the values of the variables as binary numbers and link them to form a string of binary digits (referred to as a chromosome). Strings are selected from an initial set with probabilities proportional to their 'fitness', which increases as the objective function becomes lower in a minimisation problem, and are combined by crossing over sections of the strings.
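A minimal simulated annealing sketch for a one-dimensional function with several local minima; the function, step size and cooling schedule are all invented for the example.

```python
import math
import random

random.seed(0)

def f(x):
    # A wavy function with several local minima; global minimum at x = 0.
    return x * x + 10 * math.sin(x) ** 2

x, fx = 8.0, f(8.0)          # start far from the global minimum
T = 5.0                      # initial 'temperature'
while T > 1e-3:
    x_new = x + random.gauss(0.0, 0.5)   # a random neighbouring move
    fx_new = f(x_new)
    # Always accept downhill moves; accept uphill moves with a
    # probability that shrinks as the temperature falls.
    if fx_new < fx or random.random() < math.exp((fx - fx_new) / T):
        x, fx = x_new, fx_new
    T *= 0.999               # assumed geometric cooling schedule

print(f"found minimum near x = {x:.3f}, f = {fx:.3f}")
```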
Artificial Neural Nets:
Artificial neural nets (ANNs) are empirical relationships established between input variables and output variables from a data set referred to as a training set. The objective is to predict the output for cases in which only the values of the input variables are known. Formally, ANNs include multiple regression models as special cases, but typical ANN applications have little physical interpretation. They seem to work well if the training set is very large and covers the range of likely values of the input variables; they should not be used for extrapolation outside this range. There is a plethora of applications, including the assessment of the creditworthiness of applicants for credit cards. Fitting an ANN is a large non-linear least squares problem and many ingenious methods have been proposed. Matlab has an ANN toolbox.
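A minimal sketch using scikit-learn's MLPRegressor, one of many ANN tools; the training data are invented, and prediction is kept within the range of the training set, as advised above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Invented training set: noisy samples of a smooth input-output relation.
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, 500)

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(X, y)

# Predict only within the range covered by the training set.
X_test = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)
print(net.predict(X_test))
```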
===================================================
Ref: Research Methods for Postgraduates by Tony Greenfield.