Direct Sum Decompositions of Torsion-Free Finite Rank Groups With plenty of new material not found in other books, Direct Sum Decompositions of Torsion-Free Finite Rank Groups explores advanced topics in direct sum decompositions of abelian groups and their consequences. The book illustrates a new way of studying these groups while still honoring the rich history of unique direct sum decompositions of groups. Offering a unified approach to theoretic concepts, this reference covers isomorphism, endomorphism, refinement, the Baer splitting property, Gabriel filters, and endomorphism modules. It shows how to effectively study a group G by considering finitely generated projective right End(G)-modules, the left End(G)-module G, and the ring E(G) = End(G)/N(End(G)). For instance, one of the naturally occurring properties considered is when E(G) is a commutative ring. Modern algebraic number theory provides results concerning the isomorphism of locally isomorphic rtffr groups, finitely faithful S-groups that are J-groups, and each rtffr L-group that is a J-group. The book concludes with useful appendices that contain background material and numerous examples. GBP 59.99 1
Performance Analysis of Queuing and Computer Networks Performance Analysis of Queuing and Computer Networks develops simple models and analytical methods from first principles to evaluate performance metrics of various configurations of computer systems and networks. It presents many concepts and results of probability theory and stochastic processes. After an introduction to queues in computer networks, this self-contained book covers important random variables, such as the Pareto and Poisson, that constitute models for arrival and service disciplines. It then deals with the equilibrium M/M/1/∞ queue, which is the simplest queue that is amenable to analysis. Subsequent chapters explore applications of continuous-time, state-dependent, single Markovian queues, the M/G/1 system, and discrete-time queues in computer networks. The author then proceeds to study networks of queues with exponential servers and Poisson external arrivals, as well as the G/M/1 queue and Pareto interarrival times in a G/M/1 queue. The last two chapters analyze bursty, self-similar traffic and fluid flow models and their effects on queues. GBP 59.99 1
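The equilibrium M/M/1/∞ queue mentioned above admits closed-form metrics: utilization ρ = λ/μ, mean number in system L = ρ/(1−ρ), and mean time in system W = L/λ by Little's law. A minimal Python sketch, not taken from the book; the arrival and service rates below are hypothetical:

```python
def mm1_metrics(lam, mu):
    """Equilibrium metrics for an M/M/1/inf queue; requires lam < mu."""
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # mean number in system
    W = L / lam                    # mean time in system (Little's law)
    Lq = rho ** 2 / (1 - rho)      # mean number waiting in queue
    Wq = Lq / lam                  # mean waiting time in queue
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# hypothetical rates: 8 arrivals per second against a service rate of 10 per second
m = mm1_metrics(lam=8.0, mu=10.0)
```

Note how sharply L grows as ρ approaches 1, which is the qualitative lesson of the M/M/1 analysis.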
Field Guide to Compelling Analytics Field Guide to Compelling Analytics is written for Analytics Professionals (APs) who want to increase their probability of success in implementing analytical solutions. In the past, soft skills such as presentation and persuasive writing techniques have been the extent of teaching junior APs how to effectively communicate the value of analytical products. However, there are other aspects of success, such as trust and experience, that may play a more important role in convincing fellow APs, clients, advisors, and leadership groups that their analytic solutions will work. This book introduces the formula ‘Analytics + Trust + Communication + Experience > Convince Them’ to illustrate an AP’s ability to convince a stakeholder. The ‘Convince Me’ stakeholders might be an analytics team member, team lead, decision-maker, or senior leader, either internal or external to the AP’s organization. Whoever they are, this formula represents a concise, digestible, and above all practical means to increase the likelihood that you will be able to persuade them of the value of your analytical product. Features: Includes insight questions to support class discussion; Written in broadly non-mathematical terms, designed to be accessible to any level of student or practicing AP to read, understand, and implement the concepts; Each section introduces the ideas through real-life case studies. GBP 48.99 1
The Cloud Computing Book The Future of Computing Explained This latest textbook from bestselling author Douglas E. Comer is a class-tested book providing a comprehensive introduction to cloud computing. Focusing on concepts and principles rather than commercial offerings by cloud providers and vendors, The Cloud Computing Book: The Future of Computing Explained gives readers a complete picture of the advantages and growth of cloud computing, cloud infrastructure, virtualization, automation and orchestration, and cloud-native software design. The book explains real and virtual data center facilities, including computation (e.g., servers, hypervisors, virtual machines, and containers), networks (e.g., leaf-spine architecture, VLANs, and VxLAN), and storage mechanisms (e.g., SAN, NAS, and object storage). Chapters on automation and orchestration cover the conceptual organization of systems that automate software deployment and scaling. Chapters on cloud-native software cover parallelism, microservices, MapReduce, controller-based designs, and serverless computing. Although it focuses on concepts and principles, the book uses popular technologies in examples, including Docker containers and Kubernetes. Final chapters explain security in a cloud environment and the use of models to help control the complexity involved in designing software for the cloud. The text is suitable for a one-semester course for software engineers who want to understand cloud computing and for IT managers moving an organization’s computing to the cloud. GBP 44.99 1
Introduction to Stochastic Level Crossing Techniques Introduction to Stochastic Level Crossing Techniques describes stochastic models and their analysis using the System Point Level Crossing method (abbreviated SPLC or LC). This involves deriving probability density functions (pdfs) or cumulative distribution functions (cdfs) of key random variables by applying simple level-crossing limit theorems developed by the author. The pdfs and/or cdfs are used to specify operational characteristics of the stochastic model of interest. The chapters describe distinct stochastic models and the associated key random variables in those models. For each model, a figure of a typical sample path (realization, i.e., tracing over time) of the key random variable is displayed. For each model, an analytic (Volterra) integral equation for the stationary pdf of the key random variable is created by inspection of the sample path using the simple LC limit theorems. This LC method bypasses a great deal of the algebra usually required by other methods of analysis. The integral equations are solved directly or computationally. This book is meant for students of mathematics, management science, engineering, and the natural sciences, and for researchers who use applied probability. It will also be useful to technical workers in a range of professions. Key Features: A description of one representative stochastic model (e.g., a single-server M/G/1 queue; a multiple-server M/M/c queue; an inventory system; etc.); Construction of a typical sample path of the key random variable of interest (e.g., the virtual waiting time or workload in queues; the net on-hand inventory in inventory systems; etc.); Statements of the simple LC theorems, which connect the sample-path upcrossing and downcrossing rates across state-space levels to simple mathematical functions of the stationary pdf of the key random variable at those levels; Creation of (usually Volterra) integral equations for the stationary pdf of the key random variable by inspection of the sample path; Direct analytic solution of the integral equations where feasible, or computational solutions otherwise; Use of the derived stationary pdfs for obtaining operational characteristics of the model GBP 120.00 1
Statistics for Finance Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical and mathematical techniques, including linear and nonlinear time series analysis, stochastic calculus models, stochastic differential equations, Itō’s formula, the Black–Scholes model, the generalized method-of-moments, and the Kalman filter. They explain how these tools are used to price financial derivatives, identify interest rate models, value bonds, estimate parameters, and much more. This textbook will help students understand and manage empirical research in financial engineering. It includes examples of how the statistical tools can be used to improve value-at-risk calculations and other issues. In addition, end-of-chapter exercises develop students’ financial reasoning skills. GBP 44.99 1
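The Black–Scholes model the blurb mentions prices a European call in closed form: C = S·N(d₁) − K·e^(−rT)·N(d₂). A short illustrative Python sketch (not from the book, with hypothetical parameter values), using only the standard library:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K,
    maturity T (years), risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# hypothetical at-the-money example: S = K = 100, one year, 5% rate, 20% vol
price = black_scholes_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)
```

The same d₁/d₂ quantities drive the hedging sensitivities (the "Greeks") discussed in texts of this kind.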
Inferential Models Reasoning with Uncertainty A New Approach to Sound Statistical Reasoning. Inferential Models: Reasoning with Uncertainty introduces the authors’ recently developed approach to inference: the inferential model (IM) framework. This logical framework for exact probabilistic inference does not require the user to input prior information. The authors show how an IM produces meaningful prior-free probabilistic inference at a high level. The book covers the foundational motivations for this new IM approach, the basic theory behind its calibration properties, a number of important applications, and new directions for research. It discusses alternative meaningful probabilistic interpretations of some common inferential summaries, such as p-values. It also constructs posterior probabilistic inferential summaries without a prior and Bayes’ formula, and offers insight on the interesting and challenging problems of conditional and marginal inference. This book delves into statistical inference at a foundational level, addressing what the goals of statistical inference should be. It explores a new way of thinking compared to existing schools of thought on statistical inference and encourages you to think carefully about the correct approach to scientific inference. GBP 44.99 1
Introduction to Linear Algebra Linear algebra provides the essential mathematical tools to tackle problems across the sciences. Introduction to Linear Algebra is primarily aimed at students in applied fields (e.g., computer science and engineering), providing them with a concrete, rigorous approach to face and solve various types of problems for the applications of their interest. This book offers a straightforward introduction to linear algebra that requires a minimal mathematical background to read and engage with. Features: Presented in a brief, informative, and engaging style; Suitable for a broad range of undergraduates; Contains many worked examples and exercises GBP 44.99 1
Statistical Simulation Power Method Polynomials and Other Transformations Although power method polynomials based on the standard normal distribution have been used in many different contexts for the past 30 years, it was not until recently that the probability density function (pdf) and cumulative distribution function (cdf) were derived and made available. Focusing on both univariate and multivariate nonnormal data generation, Statistical Simulation: Power Method Polynomials and Other Transformations presents techniques for conducting a Monte Carlo simulation study. It shows how to use power method polynomials for simulating univariate and multivariate nonnormal distributions with specified cumulants and correlation matrices. The book first explores the methodology underlying the power method before demonstrating this method through examples of standard normal, logistic, and uniform power method pdfs. It also discusses methods for improving the performance of a simulation based on power method polynomials. The book then develops simulation procedures for systems of linear statistical models, intraclass correlation coefficients, and correlated continuous variates and ranks. Numerical examples and results from Monte Carlo simulations illustrate these procedures. The final chapter describes how the g-and-h and generalized lambda distribution (GLD) transformations are special applications of the more general multivariate nonnormal data generation approach. Throughout the text, the author employs Mathematica® in a range of procedures and offers the source code for download online. Written by a longtime researcher of the power method, this book explains how to simulate nonnormal distributions via easy-to-use power method polynomials. By using the methodology and techniques developed in the text, readers can evaluate different transformations in terms of comparing percentiles, measures of central tendency, goodness-of-fit tests, and more.
GBP 64.99 1
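The power method transformation itself is a third-order polynomial of a standard normal variate, Y = c₀ + c₁Z + c₂Z² + c₃Z³, with the constants chosen to hit target cumulants. A hedged Python sketch (the identity coefficients below simply reproduce the standard normal; realistic constants come from tables such as those in books on the method):

```python
import random

def power_method_sample(n, c0, c1, c2, c3, seed=42):
    """Draw n values of Y = c0 + c1*Z + c2*Z**2 + c3*Z**3, Z ~ N(0, 1).
    The coefficients here are placeholders, not fitted constants."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        out.append(c0 + c1 * z + c2 * z ** 2 + c3 * z ** 3)
    return out

# identity coefficients (0, 1, 0, 0) reproduce the standard normal itself
ys = power_method_sample(10000, 0.0, 1.0, 0.0, 0.0)
sample_mean = sum(ys) / len(ys)
```

Nonzero c₂ and c₃ inject skewness and kurtosis; the method's constraints (e.g., c₀ = −c₂ for zero mean) are part of what the book develops.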
Tree-Based Methods for Statistical Learning in R Tree-Based Methods for Statistical Learning in R provides a thorough introduction to both individual decision tree algorithms (Part I) and ensembles thereof (Part II). Part I of the book brings several different tree algorithms into focus, both conventional and contemporary. Building a strong foundation for how individual decision trees work will help readers better understand tree-based ensembles, which lie at the cutting edge of modern statistical and machine learning methodology. The book follows up most ideas and mathematical concepts with code-based examples in the R statistical language, with an emphasis on using as few external packages as possible. For example, users will be exposed to writing their own random forest and gradient tree boosting functions using simple for loops and basic tree-fitting software (like rpart and party/partykit), and more. The core chapters also end with a detailed section on relevant software in both R and other open-source alternatives (e.g., Python, Spark, and Julia), and example usage on real data sets. While the book mostly uses R, it is meant to be equally accessible and useful to non-R programmers. Readers will gain a solid foundation (and appreciation) for tree-based methods and how they can be used to solve practical problems and challenges data scientists often face in applied work. Features: Thorough coverage, from the ground up, of tree-based methods (e.g., CART, conditional inference trees, bagging, boosting, and random forests). A companion website containing additional supplementary material and the code to reproduce every example and figure in the book. A companion R package, called treemisc, which contains several data sets and functions used throughout the book (e.g., there’s an implementation of gradient tree boosting with LAD loss that shows how to perform the line search step by updating the terminal node estimates of a fitted rpart tree). Interesting examples that are of practical use; for example, how to construct partial dependence plots from a fitted model in Spark MLlib (using only Spark operations), or post-processing tree ensembles via the LASSO to reduce the number of trees while maintaining or even improving performance. GBP 82.99 1
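In the spirit of the book's "write your own ensemble with simple for loops" exercises, here is a hypothetical sketch of bagging regression stumps, in Python rather than the book's R; the data, function names, and toy step-function example are all illustrative:

```python
import random

def fit_stump(xs, ys):
    """One-split regression stump: pick the x-threshold minimizing SSE."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # a split must leave points on both sides
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    if best is None:  # degenerate sample: fall back to the overall mean
        m = sum(ys) / len(ys)
        return lambda x: m
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def bagged_predict(xs, ys, x_new, n_trees=50, seed=1):
    """Bagging: average stumps fit to bootstrap resamples of the data."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        stump = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(stump(x_new))
    return sum(preds) / len(preds)

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [0, 0, 0, 0, 1, 1, 1, 1]  # a clean step at x = 4.5
pred_high = bagged_predict(xs, ys, x_new=7.0)
```

A random forest adds one twist on top of this: each split also considers only a random subset of the predictors.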
Statistical Thinking in Clinical Trials Statistical Thinking in Clinical Trials combines a relatively small number of key statistical principles and several instructive clinical trials to gently guide the reader through the statistical thinking needed in clinical trials. Randomization is the cornerstone of clinical trials, and randomization-based inference is the cornerstone of this book. Read this book to learn the elegance and simplicity of re-randomization tests as the basis for statistical inference (the ‘analyze as you randomize’ principle) and see how re-randomization tests can save a trial that required an unplanned mid-course design change. Other principles enable the reader to quickly and confidently check calculations without relying on computer programs. The ‘EZ’ principle says that a single sample size formula can be applied to a multitude of statistical tests. The ‘O minus E except after V’ principle provides a simple estimator of the log odds ratio that is ideally suited for stratified analysis with a binary outcome. The same principle can be used to estimate the log hazard ratio and facilitate stratified analysis in a survival setting. Learn these and other simple techniques that will make you an invaluable clinical trial statistician. GBP 82.99 1
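For a 2×2 table, the ‘O minus E except after V’ estimator referred to above approximates the log odds ratio as (O − E)/V, where O is the observed event count in one arm, E its expectation under the null, and V the hypergeometric variance. A small Python check against the direct log odds ratio, with illustrative counts (not taken from the book):

```python
from math import log

def o_minus_e_log_or(a, b, c, d):
    """(O - E)/V approximation to the log odds ratio for the 2x2 table
    [[a, b], [c, d]] (rows: treatment/control; columns: event/no event)."""
    n = a + b + c + d
    O = a                                   # observed events in the treatment arm
    E = (a + b) * (a + c) / n               # expected events under the null
    V = (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    return (O - E) / V

# hypothetical trial: 30/100 events on treatment vs 20/100 on control
approx = o_minus_e_log_or(30, 70, 20, 80)
direct = log((30 * 80) / (70 * 20))         # log of the sample odds ratio
```

The two estimates agree closely here, and the (O − E)/V form sums naturally across strata, which is what makes it convenient for stratified analysis.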
Matrix Theory From Generalized Inverses to Jordan Form In 1990, the National Science Foundation recommended that every college mathematics curriculum should include a second course in linear algebra. In answer to this recommendation, Matrix Theory: From Generalized Inverses to Jordan Form provides the material for a second semester of linear algebra that probes introductory linear algebra concepts while also exploring topics not typically covered in a sophomore-level class. Tailoring the material to advanced undergraduate and beginning graduate students, the authors offer instructors flexibility in choosing topics from the book. The text first focuses on the central problem of linear algebra: solving systems of linear equations. It then discusses LU factorization, derives Sylvester's rank formula, introduces full-rank factorization, and describes generalized inverses. After discussions on norms, QR factorization, and orthogonality, the authors prove the important spectral theorem. They also highlight the primary decomposition theorem, Schur's triangularization theorem, singular value decomposition, and the Jordan canonical form theorem. The book concludes with a chapter on multilinear algebra. With this classroom-tested text, students can delve into elementary linear algebra ideas at a deeper level and prepare for further study in matrix theory and abstract algebra. GBP 59.99 1
Multiple Imputation of Missing Data in Practice Basic Theory and Analysis Strategies Multiple Imputation of Missing Data in Practice: Basic Theory and Analysis Strategies provides a comprehensive introduction to the multiple imputation approach to the missing data problems that are often encountered in data analysis. Over the past 40 years or so, multiple imputation has gone through rapid development in both theory and applications. It is nowadays the most versatile, popular, and effective missing-data strategy, used by researchers and practitioners across different fields. There is a strong need to better understand and learn about multiple imputation in the research and practical community. Accessible to a broad audience, this book explains the statistical concepts of missing data problems and the associated terminology. It focuses on how to address missing data problems using multiple imputation. It describes the basic theory behind multiple imputation and many commonly used models and methods. These ideas are illustrated by examples from a wide variety of missing data problems. Real data from studies with different designs and features (e.g., cross-sectional data, longitudinal data, complex surveys, survival data, studies subject to measurement error, etc.) are used to demonstrate the methods. In order for readers not only to know how to use the methods but also to understand why multiple imputation works and how to choose appropriate methods, simulation studies are used to assess the performance of the multiple imputation methods. Example datasets and sample programming code are either included in the book or available at a GitHub site (https://github.com/he-zhang-hsu/multiple_imputation_book). Key Features: Provides an overview of statistical concepts that are useful for better understanding missing data problems and multiple imputation analysis; Provides a detailed discussion of multiple imputation models and methods targeted to different types of missing data problems (e.g., univariate and multivariate missing data problems, missing data in survival analysis, longitudinal data, complex surveys, etc.); Explores measurement error problems with multiple imputation; Discusses analysis strategies for multiple imputation diagnostics; Discusses data production issues when the goal of multiple imputation is to release datasets for public use, as done by organizations that process and manage large-scale surveys with nonresponse problems. For some examples, illustrative datasets and sample programming code from popular statistical packages (e.g., SAS, R, WinBUGS) are included in the book; for others, they are available at the GitHub site (https://github.com/he-zhang-hsu/multiple_imputation_book). GBP 82.99 1
Flexible Imputation of Missing Data Second Edition Missing data pose challenges to real-life data analysis. Simple ad-hoc fixes, like deletion or mean imputation, only work under highly restrictive conditions that are often not met in practice. Multiple imputation replaces each missing value by multiple plausible values. The variability between these replacements reflects our ignorance of the true (but missing) value. Each of the completed data sets is then analyzed by standard methods, and the results are pooled to obtain unbiased estimates with correct confidence intervals. Multiple imputation is a general approach that also inspires novel solutions to old problems by reformulating the task at hand as a missing-data problem. This is the second edition of a popular book on multiple imputation, focused on explaining the application of methods through detailed worked examples using the MICE package as developed by the author. This new edition incorporates the recent developments in this fast-moving field. This class-tested book avoids mathematical and technical details as much as possible: formulas are accompanied by verbal statements that explain the formula in accessible terms. The book sharpens the reader’s intuition on how to think about missing data and provides all the tools needed to execute a well-grounded quantitative analysis in the presence of missing data. GBP 38.99 1
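The pooling step described above follows Rubin's rules: average the m completed-data estimates, and combine the within-imputation variance W̄ with the between-imputation variance B as T = W̄ + (1 + 1/m)B. A minimal Python sketch with made-up numbers (the book itself works through MICE in R):

```python
from statistics import mean, variance  # variance is the sample (n-1) variance

def pool_rubin(estimates, variances):
    """Rubin's rules: pooled point estimate and total variance across
    m completed-data analyses."""
    m = len(estimates)
    qbar = mean(estimates)          # pooled point estimate
    W = mean(variances)             # within-imputation variance
    B = variance(estimates)         # between-imputation variance
    T = W + (1 + 1 / m) * B
    return qbar, T

# hypothetical results from m = 5 imputed data sets
qbar, T = pool_rubin([2.1, 1.9, 2.3, 2.0, 2.2], [0.25, 0.25, 0.25, 0.25, 0.25])
```

The (1 + 1/m) factor is what makes the pooled interval honestly wider than any single completed-data interval, reflecting the uncertainty due to the missing values.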
A Concise Introduction to Pure Mathematics Accessible to all students with a sound background in high school mathematics, A Concise Introduction to Pure Mathematics, Fourth Edition presents some of the most fundamental and beautiful ideas in pure mathematics. It covers not only standard material but also many interesting topics not usually encountered at this level, such as the theory of solving cubic equations; Euler's formula for the numbers of corners, edges, and faces of a solid object and the five Platonic solids; the use of prime numbers to encode and decode secret information; the theory of how to compare the sizes of two infinite sets; and the rigorous theory of limits and continuous functions. New to the Fourth Edition: Two new chapters that serve as an introduction to abstract algebra via the theory of groups, covering abstract reasoning as well as many examples and applications; New material on inequalities, counting methods, the inclusion-exclusion principle, and Euler's phi function; Numerous new exercises, with solutions to the odd-numbered ones. Through careful explanations and examples, this popular textbook illustrates the power and beauty of basic mathematical concepts in number theory, discrete mathematics, analysis, and abstract algebra. Written in a rigorous yet accessible style, it continues to provide a robust bridge between high school and higher-level mathematics, enabling students to study more advanced courses in abstract algebra and analysis. GBP 180.00 1
Foundations of Statistics for Data Scientists With R and Python Foundations of Statistics for Data Scientists: With R and Python is designed as a textbook for a one- or two-term introduction to mathematical statistics for students training to become data scientists. It is an in-depth presentation of the topics in statistical science with which any data scientist should be familiar, including probability distributions, descriptive and inferential statistical methods, and linear modeling. The book assumes knowledge of basic calculus, so the presentation can focus on why it works as well as how to do it. Compared to traditional mathematical statistics textbooks, however, the book has less emphasis on probability theory and more emphasis on using software to implement statistical methods and to conduct simulations to illustrate key concepts. All statistical analyses in the book use R software, with an appendix showing the same analyses with Python. Key Features: Shows the elements of statistical science that are important for students who plan to become data scientists; Includes Bayesian and regularized fitting of models (e.g., showing an example using the lasso), classification and clustering, and implementing methods with modern software (R and Python); Contains nearly 500 exercises. The book also introduces modern topics that do not normally appear in mathematical statistics texts but are highly relevant for data scientists, such as Bayesian inference, generalized linear models for non-normal responses (e.g., logistic regression and Poisson loglinear models), and regularized model fitting. The nearly 500 exercises are grouped into Data Analysis and Applications and Methods and Concepts. Appendices introduce R and Python and contain solutions for odd-numbered exercises. The book's website (http://stat4ds.rwth-aachen.de/) has expanded R, Python, and Matlab appendices and all data sets from the examples and exercises.
GBP 82.99 1
Elliptic Operators Topology and Asymptotic Methods Ten years after publication of the popular first edition of this volume, the index theorem continues to stand as a central result of modern mathematics: one of the most important foci for the interaction of topology, geometry, and analysis. Retaining its concise presentation but offering streamlined analyses and expanded coverage of important examples and applications, Elliptic Operators, Topology and Asymptotic Methods, Second Edition introduces the ideas surrounding the heat equation proof of the Atiyah-Singer index theorem. The author builds towards proofs of the Lefschetz formula and the full index theorem with four chapters of geometry, five chapters of analysis, and four chapters of topology. The topics addressed include Hodge theory, Weyl's theorem on the distribution of the eigenvalues of the Laplacian, the asymptotic expansion for the heat kernel, and the index theorem for Dirac-type operators using Getzler's direct method. As a dessert, the final two chapters offer discussion of Witten's analytic approach to the Morse inequalities and the L2-index theorem of Atiyah for Galois coverings. The text assumes some background in differential geometry and functional analysis. With the partial differential equation theory developed within the text and the exercises in each chapter, Elliptic Operators, Topology and Asymptotic Methods becomes the ideal vehicle for self-study or coursework. Mathematicians, researchers, and physicists working with index theory or supersymmetry will find it a concise but wide-ranging introduction to this important and intriguing field. GBP 180.00 1
Sample Sizes for Clinical Trials Sample Sizes for Clinical Trials, Second Edition is a practical book that assists researchers in their estimation of the sample size for clinical trials. Throughout the book there are detailed worked examples to illustrate both how to do the calculations and how to present them to colleagues or in protocols. The book also highlights some of the pitfalls in calculations as well as the key steps that lead to the final sample size calculation. Features: Comprehensive coverage of sample size calculations, including normal, binary, ordinal, and survival outcome data; Covers superiority, equivalence, non-inferiority, bioequivalence, and precision objectives for both parallel group and crossover designs; Highlights how trial objectives impact the study design with respect to both the derivation of sample size formulae and the size of the study; Motivated with examples of real-life clinical trials showing how the calculations can be applied; New edition extended with all chapters revised, some substantially, and four completely new chapters on multiplicity, cluster trials, pilot studies, and single-arm trials. The book is primarily aimed at researchers and practitioners of clinical trials and biostatistics, and could be used to teach a course on sample size calculations. The importance of a sample size calculation when designing a clinical trial is highlighted in the book. It enables readers to quickly find an appropriate sample size formula, with an associated worked example, complemented by tables to assist in the calculations. GBP 89.99 1
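As a flavor of the calculations such books cover, the standard formula for comparing two means with a normal outcome is n = 2(z₁₋α/₂ + z₁₋β)²σ²/δ² per group. A Python sketch with hardcoded standard normal quantiles (illustrative only; not taken from the book, which derives and tabulates such formulae properly):

```python
from math import ceil

# standard normal quantiles (hardcoded here; in practice use a stats library)
Z_975 = 1.959964   # z_{1 - alpha/2} for two-sided alpha = 0.05
Z_80 = 0.841621    # z_{1 - beta} for 80% power

def n_per_group(sigma, delta, z_alpha=Z_975, z_beta=Z_80):
    """Per-group sample size for a two-sided two-sample z-test of means:
    common SD sigma, clinically relevant difference delta."""
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# hypothetical trial: detect a half-SD difference at 80% power
n = n_per_group(sigma=1.0, delta=0.5)
```

The familiar answer of about 63 per group for a half-SD effect is a useful sanity check when reviewing protocols.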
Combinatorial Nullstellensatz With Applications to Graph Colouring Combinatorial Nullstellensatz is a novel theorem in algebra introduced by Noga Alon to tackle combinatorial problems in diverse areas of mathematics. This book focuses on the applications of this theorem to graph colouring. A key step in the applications of the Combinatorial Nullstellensatz is to show that the coefficient of a certain monomial in the expansion of a polynomial is nonzero. The major part of the book concentrates on three methods for calculating the coefficients: Alon-Tarsi orientation: The task is to show that a graph has an orientation with given maximum out-degree for which the number of even Eulerian sub-digraphs is different from the number of odd Eulerian sub-digraphs. In particular, this method is used to show that a graph whose edge set decomposes into a Hamilton cycle and vertex-disjoint triangles is 3-choosable, and that every planar graph has a matching whose deletion results in a 4-choosable graph. Interpolation formula for the coefficient: This method is used in particular to show that toroidal grids of even order are 3-choosable, r-edge-colourable r-regular planar graphs are r-edge choosable, and complete graphs of order p+1, where p is a prime, are p-edge choosable. Coefficients as permanents of matrices: This method is used in particular in the study of the list version of vertex-edge weighting and to show that every graph is (2,3)-choosable. The book is suited as a reference for a graduate course in mathematics. GBP 52.99 1
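For reference, the theorem at the heart of the book states, in its standard form:

```latex
Let $F$ be a field and let $f \in F[x_1,\dots,x_n]$ have total degree
$\deg f = \sum_{i=1}^{n} t_i$, where each $t_i$ is a nonnegative integer.
If the coefficient of $\prod_{i=1}^{n} x_i^{t_i}$ in $f$ is nonzero, and
$S_1,\dots,S_n \subseteq F$ satisfy $|S_i| > t_i$ for each $i$, then there
exist $s_1 \in S_1, \dots, s_n \in S_n$ such that $f(s_1,\dots,s_n) \neq 0$.
```

This is why the applications reduce to showing that one carefully chosen monomial coefficient is nonzero.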
Sample Size Calculations in Clinical Research Praise for the Second Edition: ‘…this is a useful, comprehensive compendium of almost every possible sample size formula. The strong organization and carefully defined formulae will aid any researcher designing a study.’ – Biometrics. ‘This impressive book contains formulae for computing sample size in a wide range of settings. One-sample studies and two-sample comparisons for quantitative, binary, and time-to-event outcomes are covered comprehensively, with separate sample size formulae for testing equality, non-inferiority, and equivalence. Many less familiar topics are also covered…’ – Journal of the Royal Statistical Society. Sample Size Calculations in Clinical Research, Third Edition presents statistical procedures for performing sample size calculations during various phases of clinical research and development. A comprehensive and unified presentation of statistical concepts and practical applications, this book includes a well-balanced summary of current and emerging clinical issues, regulatory requirements, and recently developed statistical methodologies for sample size calculation. Features: Compares the relative merits and disadvantages of statistical methods for sample size calculations; Explains how the formulae and procedures for sample size calculations can be used in a variety of clinical research and development stages; Presents real-world examples from several therapeutic areas, including cardiovascular medicine, the central nervous system, anti-infective medicine, oncology, and women’s health; Provides sample size calculations for dose response studies, microarray studies, and Bayesian approaches. This new edition is updated throughout, includes many new sections, and features five new chapters on emerging topics: two-stage seamless adaptive designs, cluster randomized trial design, the zero-inflated Poisson distribution, clinical trials with extremely low incidence rates, and clinical trial simulation. GBP 38.99 1
Drug Development for Rare Diseases A disease is defined as rare if it affects fewer than 200,000 people in the United States. It is estimated that there are more than 7,000 rare diseases, which collectively affect 30 million Americans, or 10% of the US population. This diverse and complex disease area poses challenges for patients, caregivers, regulators, drug developers, and other stakeholders. This book gives an overview of the common issues facing rare disease drug developers, summarizes challenges specific to clinical development in small populations, discusses drug development strategies in the evolving regulatory environment, explains the generation and utilization of different data and evidence inside and beyond clinical trials, and uses recent examples to demonstrate these challenges and the development strategies that respond to them. Key Features: • Rare disease. • Drug development. • Innovative clinical trial design. • Regulatory approval. • Real-world evidence. GBP 120.00 1
Equivalence and Noninferiority Tests for Quality, Manufacturing and Test Engineers In engineering and quality control, various situations, including process validation and design verification, require equivalence and noninferiority tests. Equivalence and Noninferiority Tests for Quality, Manufacturing and Test Engineers presents methods for using validation and verification test data to demonstrate equivalence and noninferiority in engineering and applied science. The book covers numerous tests drawn from the author's more than 30 years of work in a range of industrial settings. It provides computational formulas for the tests, methods to determine or justify sample sizes, and formulas to calculate power and operating characteristic curves. The methods are accessible using standard statistical software and do not require complicated programming. The book also includes computer code and screen shots for SAS and JMP. This book provides you with a guide to performing validation and verification tests that demonstrate the adequacy of your process, system, or product. It will help you choose the best test for your application. GBP 59.99 1
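The equivalence tests described above are commonly carried out as two one-sided tests (TOST). A minimal large-sample sketch in Python, assuming a known common SD (the function name and example numbers are illustrative, not taken from the book):

```python
from math import sqrt
from statistics import NormalDist

def tost_equivalence(mean_diff, sd, n1, n2, margin, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of two means,
    large-sample z version with known common SD.
    Equivalence is declared only if the difference is significantly
    greater than -margin AND significantly less than +margin."""
    se = sd * sqrt(1 / n1 + 1 / n2)
    z_crit = NormalDist().inv_cdf(1 - alpha)
    z_lower = (mean_diff + margin) / se   # rejects H0: diff <= -margin
    z_upper = (margin - mean_diff) / se   # rejects H0: diff >= +margin
    return z_lower > z_crit and z_upper > z_crit

# Observed difference 0.2, common SD 1.0, 100 units per arm, margin 0.5:
print(tost_equivalence(0.2, 1.0, 100, 100, 0.5))  # -> True (equivalent)
```

Note the asymmetry with ordinary significance testing: with too few units per arm the same observed difference fails to demonstrate equivalence, which is exactly why the book's sample-size and operating-characteristic formulas matter.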
Analyzing High-Dimensional Gene Expression and DNA Methylation Data with R Analyzing High-Dimensional Gene Expression and DNA Methylation Data with R is the first practical book that presents a "pipeline" of analytical methods, with concrete examples, starting from raw gene expression and DNA methylation data at the genome scale. Methods for quality control, data pre-processing, data mining, and further assessments are presented in the book, and R programs based on simulated data and real data are included. All code and example data are reproducible. Features: · Provides a sequence of analytical tools for genome-scale gene expression data and DNA methylation data, starting from quality control and pre-processing of raw genome-scale data. · Organized as a parallel presentation of statistical methods and the corresponding R packages/functions for quality control, pre-processing, and data analyses (e.g., clustering and networks). · Includes source code with simulated and real data to reproduce the results. Readers are expected to gain the ability to independently analyze genome-scale expression and methylation data and detect potential biomarkers. This book is ideal for students majoring in statistics, biostatistics, and bioinformatics, and for researchers with an interest in high-dimensional genetic and epigenetic studies. GBP 66.99 1
Surrogates Gaussian Process Modeling, Design and Optimization for the Applied Sciences Surrogates is a graduate textbook, or professional handbook, on topics at the interface between machine learning, spatial statistics, computer simulation, meta-modeling (i.e., emulation), design of experiments, and optimization. Experimentation through simulation, human out-of-the-loop statistical support (focusing on the science), management of dynamic processes, online and real-time analysis, automation, and practical application are at the forefront. Topics include: Gaussian process (GP) regression for flexible nonparametric and nonlinear modeling; applications to uncertainty quantification, sensitivity analysis, calibration of computer models to field data, sequential design/active learning, and (blackbox/Bayesian) optimization under uncertainty. Advanced topics include treed partitioning, local GP approximation, and the modeling of simulation experiments (e.g., agent-based models) with coupled nonlinear mean and variance (heteroskedastic) models. The treatment appreciates historical response surface methodology (RSM) and canonical examples, but emphasizes contemporary methods and implementation in R at modern scale. R Markdown facilitates a fully reproducible tour, complete with motivation from, application to, and illustration with compelling real-data examples. The presentation targets numerically competent practitioners in the engineering, physical, and biological sciences. The writing is statistical in form, but the subjects are not about statistics; rather, they are about prediction and synthesis under uncertainty, about visualization and information design, decision making, computing, and clean code. GBP 38.99 1
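The first topic above, GP regression, can be illustrated with a minimal NumPy sketch (illustrative only; the book itself works in R, and these function names are our own):

```python
import numpy as np

def sq_exp_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance: k(a,b) = var * exp(-(a-b)^2 / (2*length^2))."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6, length=1.0, var=1.0):
    """Posterior mean and pointwise variance of a zero-mean GP at x_test."""
    K = sq_exp_kernel(x_train, x_train, length, var) + noise * np.eye(len(x_train))
    Ks = sq_exp_kernel(x_test, x_train, length, var)
    Kss = sq_exp_kernel(x_test, x_test, length, var)
    alpha = np.linalg.solve(K, y_train)          # K^{-1} y
    mean = Ks @ alpha                            # posterior mean
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)    # posterior covariance
    return mean, np.diag(cov)

# Toy surrogate: noisy observations of sin(x) on [0, 2*pi]
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x) + 0.05 * rng.standard_normal(20)
mean, var_post = gp_predict(x, y, x, noise=0.05**2)
```

The posterior mean smooths the noisy responses back toward the underlying function, and the posterior variance quantifies uncertainty; sequential design and Bayesian optimization, as covered in the book, build directly on these two outputs.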
Innovative Methods for Rare Disease Drug Development In the United States, a rare disease is defined by the Orphan Drug Act as a disorder or condition that affects fewer than 200,000 persons. For the approval of orphan drug products for rare diseases, the traditional approach of a power analysis for sample size calculation is not feasible, because only a limited number of subjects are available for clinical trials. In this case, innovative approaches are needed to provide substantial evidence meeting the same standards for statistical assurance as for drugs used to treat common conditions. Innovative Methods for Rare Disease Drug Development focuses on biostatistical applications in the design and analysis of pharmaceutical research and development, from both regulatory and scientific (statistical) perspectives. Key Features: Reviews critical issues (e.g., endpoint/margin selection, sample size requirements, and complex innovative designs). Provides a better understanding of statistical concepts and methods which may be used in regulatory review and approval. Clarifies controversial statistical issues in regulatory review and approval accurately and reliably. Makes recommendations for evaluating rare disease regulatory submissions. Proposes innovative study designs and statistical methods for rare disease drug development, including n-of-1 trial designs, adaptive trial designs, and master protocols such as platform trials. Provides insight regarding current regulatory guidance on rare disease drug development, such as gene therapy. GBP 44.99 1