Previous versions (as known to CRANberries) which should be available via the Archive link are:
2010-03-09 1.0
2008-06-08 0.4
2007-05-10 0.3-4
Previous versions (as known to CRANberries) which should be available via the Archive link are:
2015-03-27 1.10
The typical way to deal with zeros and missing values in compositional data sets is to impute them with a reasonable value, and then the desired statistical model is estimated with the imputed data set, e.g., a regression model. This contribution aims at presenting alternative approaches to this problem within the framework of Bayesian regression with a compositional response. In the first step, a compositional data set with missing data is considered to follow a normal distribution on the simplex, whose mean value is given as an Aitchison affine linear combination of some fully observed explanatory variables. Both the coefficients of this linear combination and the missing values can be estimated with standard Gibbs sampling techniques. In the second step, a normally distributed additive error is considered superimposed on the compositional response, and values are taken as ‘below the detection limit’ (BDL) if they are ‘too small’ in comparison with the additive standard deviation of each variable. Within this framework, the regression parameters and all missing values (including BDL) can be estimated with a Metropolis-Hastings algorithm. Both methods estimate the regression coefficients without the need for any preliminary imputation step, and adequately propagate the uncertainty derived from the fact that the missing values and BDL are not actually observed, something imputation methods cannot achieve.
by van den Boogaart, K. G., Tolosana-Delgado, R., Templ, M. at March 26, 2015 05:51 AM
Compositional data analysis usually deals with relative information between parts where the total (abundances, mass, amount, etc.) is unknown or uninformative. This article addresses the question of what to do when the total is known and is of interest. Tools used in this case are reviewed and analysed, in particular the relationship between the positive orthant of D-dimensional real space, the product space of the real line times the D-part simplex, and their Euclidean space structures. The first alternative corresponds to data analysis taking logarithms on each component, and the second one to treating a log-transformed total jointly with a composition describing the distribution of component amounts. Real data about total abundances of phytoplankton in an Australian river motivated the present study and are used for illustration.
by Pawlowsky-Glahn, V., Egozcue, J. J., Lovell, D. at March 26, 2015 05:51 AM
Compositional data analysis deals with situations where the relevant information is contained only in the ratios between the measured variables, and not in the reported values. This article focuses on high-dimensional compositional data (in the sense of hundreds or even thousands of variables), as they appear in chemometrics (e.g., mass spectral data), proteomics or genomics. The goal of this contribution is to perform a dimension reduction of such data, where the new directions should allow for interpretability. An approach named principal balances turned out to be successful for low dimensions. Here, the concept of sparse principal component analysis is proposed for constructing principal directions, the so-called sparse principal balances. They are sparse (contain many zeros), build an orthonormal basis in the sample space of the compositional data, are efficient for dimension reduction and are applicable to high-dimensional data.
by Mert, M. C., Filzmoser, P., Hron, K. at March 26, 2015 05:51 AM
R has excellent support for dates and times via the built-in Date
and POSIXt
classes. Their usage, however, is not always as straightforward as one
would want. Certain conversions are more cumbersome than we would like: while
as.Date("2015-03-22") works,
would it not be nice if as.Date("20150322")
(a
format often used in logfiles) also worked, or for that matter
as.Date(20150322L)
using an integer variable, or even
as.Date("2015-Mar-22")
and as.Date("2015Mar22")
?
Similarly, many date and time formats suitable for POSIXct
(the short form)
and POSIXlt
(the long form with accessible components) often require rather too
much formatting, and/or defaults. Why for example does
as.POSIXct(as.numeric(Sys.time()), origin="1970-01-01")
require the
origin
argument on the conversion back (from fractional seconds since the
epoch) into datetime—when it is not required when creating the
double-precision floating point representation of time since the epoch?
But thanks to Boost and its excellent
Boost Date_Time
library—which we already mentioned in
this post about the BH package— we can
address parsing of dates and times. It permitted us to write a new function
toPOSIXct()
which is now part of the
RcppBDT package (albeit right
now just the GitHub version but we
expect this to migrate to CRAN “soon” as well).
We will now discuss the outline of this implementation. For full details, see the source file.
#include <boost/date_time.hpp>
#include <boost/lexical_cast.hpp>
#include <Rcpp.h>
// [[Rcpp::depends(BH)]]
namespace bt = boost::posix_time;
const std::locale formats[] = { // this shows a subset only, see the source file for full list
std::locale(std::locale::classic(), new bt::time_input_facet("%Y-%m-%d %H:%M:%S%f")),
std::locale(std::locale::classic(), new bt::time_input_facet("%Y/%m/%d %H:%M:%S%f")),
std::locale(std::locale::classic(), new bt::time_input_facet("%Y-%m-%d")),
std::locale(std::locale::classic(), new bt::time_input_facet("%b/%d/%Y")),
};
const size_t nformats = sizeof(formats)/sizeof(formats[0]);
Note that we show only two datetime formats along with two date formats. The actual implementation has many more.
The actual conversion from string to a double (the underlying format in
POSIXct
) is done by the following function. It loops over all given
formats, and returns the computed value after the first match. In case of
failure, a floating point NA
is returned.
double stringToTime(const std::string s) {
bt::ptime pt, ptbase;
// loop over formats and try them til one fits
for (size_t i=0; pt == ptbase && i < nformats; ++i) {
std::istringstream is(s);
is.imbue(formats[i]);
is >> pt;
}
if (pt == ptbase) {
return NAN;
} else {
const bt::ptime timet_start(boost::gregorian::date(1970,1,1));
bt::time_duration diff = pt - timet_start;
// Define BOOST_DATE_TIME_POSIX_TIME_STD_CONFIG to use nanoseconds
// (and then use diff.total_nanoseconds()/1.0e9; instead)
return diff.total_microseconds()/1.0e6;
}
}
We want to be able to convert from numeric as well as string formats. For
this, we write a templated (and vectorised) function which invokes the actual
conversion function for each argument. It also deals (somewhat
heuristically) with two corner cases: we want 20150322
to be converted from
either integer or numeric input, but in the latter case we need to distinguish this
value and its range from the (much larger) values for seconds since the epoch.
That creates a minor ambiguity: we will not be able to convert back inputs
given as seconds since the epoch for the first few years after January 1, 1970.
But as these are rare in timestamp form, we can accept the trade-off.
template <int RTYPE>
Rcpp::DatetimeVector toPOSIXct_impl(const Rcpp::Vector<RTYPE>& sv) {
int n = sv.size();
Rcpp::DatetimeVector pv(n);
for (int i=0; i<n; i++) {
std::string s = boost::lexical_cast<std::string>(sv[i]);
//Rcpp::Rcout << sv[i] << " -- " << s << std::endl;
// Boost Date_Time gets the 'YYYYMMDD' format wrong, even
// when given as an explicit argument. So we need to test here.
// While we are at it, may as well test for obviously wrong data.
int l = s.size();
if ((l < 8) || // impossibly short
(l == 9)) { // 8 or 10 works, 9 cannot
Rcpp::stop("Inadmissible input: %s", s);
} else if (l == 8) { // turn YYYYMMDD into YYYY/MM/DD
s = s.substr(0, 4) + "/" + s.substr(4, 2) + "/" + s.substr(6,2);
}
pv[i] = stringToTime(s);
}
return pv;
}
Finally, we can look at the user-facing function. It accepts input in either integer, numeric or character vector form, and then dispatches accordingly to the templated internal function we just discussed. Other inputs are unsuitable and trigger an error.
// [[Rcpp::export]]
Rcpp::DatetimeVector toPOSIXct(SEXP x) {
if (Rcpp::is<Rcpp::CharacterVector>(x)) {
return toPOSIXct_impl<STRSXP>(x);
} else if (Rcpp::is<Rcpp::IntegerVector>(x)) {
return toPOSIXct_impl<INTSXP>(x);
} else if (Rcpp::is<Rcpp::NumericVector>(x)) {
// here we have two cases: either we are an int like
// 20150315 'mistakenly' cast to numeric, or we actually
// are a proper large numeric (ie as.numeric(Sys.time()))
Rcpp::NumericVector v(x);
if (v[0] < 21990101) { // somewhat arbitrary cutoff
// actual integer date notation: convert to string and parse
return toPOSIXct_impl<REALSXP>(x);
} else {
// we think it is a numeric time, so treat it as one
return Rcpp::DatetimeVector(x);
}
} else {
Rcpp::stop("Unsupported Type");
return R_NilValue;//not reached
}
}
A simple illustration follows. A fuller demonstration is part of the RcppBDT package. This already shows support for sub-second granularity and a variety of date formats.
## parsing character
s <- c("2004-03-21 12:45:33.123456", # ISO
"2004/03/21 12:45:33.123456", # variant
"20040321", # just dates work fine as well
"Mar/21/2004", # US format, also support month abbreviation or full
"rapunzel") # will produce a NA
p <- toPOSIXct(s)
options("digits.secs"=6) # make sure we see microseconds in output
print(format(p, tz="UTC")) # format UTC times as UTC (helps for Date types too)
[1] "2004-03-21 12:45:33.123456" "2004-03-21 12:45:33.123456"
[3] "2004-03-21 00:00:00.000000" "2004-03-21 00:00:00.000000"
[5] NA
We can also illustrate integer and numeric inputs:
## parsing integer types
s <- c(20150315L, 20010101L, 20141231L)
p <- toPOSIXct(s)
print(format(p, tz="UTC"))
[1] "2015-03-15" "2001-01-01" "2014-12-31"
## parsing numeric types
s <- c(20150315, 20010101, 20141231)
p <- toPOSIXct(s)
print(format(p, tz="UTC"))
[1] "2015-03-15" "2001-01-01" "2014-12-31"
Note that we always forced display using UTC rather than local time, the R default.
Vol. 64, Issue 11, Mar 2015
Abstract:
Empirical analysis of statistical algorithms often demands time-consuming experiments. We present two R packages which greatly simplify working in batch computing environments. The package BatchJobs implements the basic objects and procedures to control any batch cluster from within R. It is structured around cluster versions of the well-known higher order functions Map, Reduce and Filter from functional programming. Computations are performed asynchronously and all job states are persistently stored in a database, which can be queried at any point in time. The second package, BatchExperiments, is tailored for the still very general scenario of analyzing arbitrary algorithms on problem instances. It extends package BatchJobs by letting the user define an array of jobs of the kind “apply algorithm A to problem instance P and store results”. It is possible to associate statistical designs with parameters of problems and algorithms and therefore to systematically study their influence on the results.
The packages’ main features are: (a) Convenient usage: All relevant batch system operations are either handled internally or mapped to simple R functions. (b) Portability: Both packages use a clear and well-defined interface to the batch system which makes them applicable in most high-performance computing environments. (c) Reproducibility: Every computational part has an associated seed to ensure reproducibility even when the underlying batch system changes. (d) Abstraction and good software design: The code layers for algorithms, experiment definitions and execution are cleanly separated and enable the writing of readable and maintainable code.
Vol. 64, Issue 10, Mar 2015
Abstract:
Mathematical models of disease progression predict disease outcomes and are useful epidemiological tools for planners and evaluators of health interventions. The R package gems is a tool that simulates disease progression in patients and predicts the effect of different interventions on patient outcome. Disease progression is represented by a series of events (e.g., diagnosis, treatment and death), displayed in a directed acyclic graph. The vertices correspond to disease states and the directed edges represent events. The package gems allows simulations based on a generalized multistate model that can be described by a directed acyclic graph with continuous transition-specific hazard functions. The user can specify an arbitrary hazard function and its parameters. The model includes parameter uncertainty, does not need to be a Markov model, and may take the history of previous events into account. Applications are not limited to the medical field and extend to other areas where multistate simulation is of interest. We provide a technical explanation of the multistate models used by gems, explain the functions of gems and their arguments, and show a sample application.
Vol. 64, Issue 9, Mar 2015
Abstract:
One-way layouts, i.e., a single factor with several levels and multiple observations at each level, frequently arise in various fields. Usually not only a global hypothesis is of interest but also multiple comparisons between the different treatment levels. In most practical situations, the distribution of observed data is unknown and there may exist a number of atypical measurements and outliers. Hence, use of parametric and semiparametric procedures that impose restrictive distributional assumptions on observed samples becomes questionable. This, in turn, emphasizes the demand on statistical procedures that enable us to accurately and reliably analyze one-way layouts with minimal conditions on available data. Nonparametric methods offer such a possibility and thus become of particular practical importance. In this article, we introduce a new R package nparcomp which provides an easy and user-friendly access to rank-based methods for the analysis of unbalanced one-way layouts. It provides procedures performing multiple comparisons and computing simultaneous confidence intervals for the estimated effects which can be easily visualized. The special case of two samples, the nonparametric Behrens-Fisher problem, is included. We illustrate the implemented procedures by examples from biology and medicine.
Vol. 64, Issue 8, Mar 2015
Abstract:
The R package multgee implements the local odds ratios generalized estimating equations (GEE) approach proposed by Touloumis, Agresti, and Kateri (2013), a GEE approach for correlated multinomial responses that circumvents theoretical and practical limitations of the GEE method. A main strength of multgee is that it provides GEE routines for both ordinal (ordLORgee) and nominal (nomLORgee) responses, while other relevant software in R and SAS is restricted to ordinal responses under a marginal cumulative link model specification. In addition, multgee offers a marginal adjacent categories logit model for ordinal responses and a marginal baseline category logit model for nominal responses. Further, utility functions are available to ease the local odds ratios structure selection (intrinsic.pars) and to perform a Wald type goodness-of-fit test between two nested GEE models (waldts). We demonstrate the application of multgee through a clinical trial with clustered ordinal multinomial responses.
A new minor release of littler is available now.
It adds or extends a number of things:
- added support for drat by adding a new example installDrat.r;
- the install.r, install2.r and check.r scripts now use getOption("repos") to set the default repos; this works well with drat and multiple repos set via, e.g., ~/.littler.r or /etc/littler.r;
- added support for installing Debian binaries as part of a check.r run; this can be particularly useful for one-command checks as done by some of the Rocker containers;
- added support for reproducible builds: if REPRODUCIBLE_BUILD is defined, no date and time stamp is added to the binary;
- added new command-line option -L|--libpath to expand the library path used for packages;
- added support for setting multiple repos from the command-line in the install2.r script;
- the manual page was updated with respect to recent additions;
- a link to the examples web page was added to the --usage output display.
See the littler examples page for more details.
Full details for the littler release are provided as usual at the ChangeLog page.
The code is available via the GitHub repo, from tarballs off my littler page and the local directory here. A fresh package has gone to the incoming queue at Debian; Michael Rutter will probably have new Ubuntu binaries at CRAN in a few days too.
Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Editorial Note: The following post was kindly contributed by Steven Pav.
After playing around with drat for a few days now, my impressions of it are best captured by Dirk's quote:
It just works.
To get some idea of what I mean by this, suppose you are a happy consumer of R packages, but want access to, say, the latest, greatest releases of my distribution package, sadists. You can simply add the following to your .Rprofile
file:
drat::add("shabbychef")
After this, you instantly have access to new releases in the github/shabbychef drat store via the package tools you already know and tolerate. You can use
install.packages('sadists')
to install the sadists package from the drat store, for example. Similarly, if you issue
update.packages(ask=FALSE)
all the drat stores you have added will be checked for package updates, along with their dependencies which may well come from other repositories including CRAN.
The most obvious use cases are:
Micro releases. For package authors, this provides a means to get feedback from the early adopters, but also allows one to push small changes and bug fixes without burning through your CRAN karma (if you have any left). My personal drat store tends to be a few minor releases ahead of my CRAN releases.
Local repositories. In my professional life, I write and maintain proprietary packages. Pushing package updates used to involve saving the package .tar.gz to a NAS, then calling something like R CMD INSTALL package_name_0.3.1.9001.tar.gz
. This is not something I wanted to ask of my colleagues. With drat, they can instead add the following stanza to .Rprofile: drat:::addRepo('localRepo','file:///mnt/NAS/r/local/drat')
, and then rely on update.packages
to do the rest.
I suspect that in the future, drat might be (ab)used in the following ways:
Rolling your own vanilla CRAN mirror, though I suspect there are better existing ways to accomplish this.
Patching CRAN. Suppose you found a bug in a package on CRAN (inconceivable!). As it stands now, you email the maintainer, and wait for a fix. Maybe the patch is trivial, but suppose it is never delivered. Now, you can simply make the patch yourself, pick a higher revision number, and stash it in your drat store. The only downside is that eventually the package maintainer might bump their revision number without pushing a fix, and you are stuck in an arms race of version numbers.
Forgoing CRAN altogether. While some package maintainers might find this attractive, I think I would prefer a single huge repository, warts and all, to a landscape of a million microrepos. Perhaps some enterprising group will set up a CRAN-like drat store on github, and accept packages by pull request (whether github CDN can or will support the traffic that CRAN does is another matter), but this seems a bit too futuristic for me now.
In exchange for writing this blog post, I get to lobby Dirk for some features in drat:
I shudder at the thought of hundreds of tiny drat stores. Perhaps there should be a way to aggregate addRepo
commands in some way. This would allow curators to publish their suggested lists of repos.
Drat stores are served in the gh-pages
branch of a github repo. I wish there were some way to keep the index.html file in that directory reflect the packages present in the sources. Maybe this could be achieved with some canonical RMarkdown code that most people use.
While the population average treatment effect has been the subject of extensive methods and applied research, less consideration has been given to the sample average treatment effect: the mean difference in the counterfactual outcomes for the study units. The sample parameter is easily interpretable and is arguably the most relevant when the study units are not representative of a greater population or when the exposure's impact is heterogeneous. Formally, the sample effect is not identifiable from the observed data distribution. Nonetheless, targeted maximum likelihood estimation (TMLE) can provide an asymptotically unbiased and efficient estimate of both the population and sample parameters. In this paper, we study the asymptotic and finite sample properties of the TMLE for the sample effect and provide a conservative variance estimator. In most settings, the sample parameter can be estimated more efficiently than the population parameter. Finite sample simulations illustrate the potential gains in precision and power from selecting the sample effect as the target of inference. As a motivating example, we discuss the Sustainable East Africa Research in Community Health (SEARCH) study, an ongoing cluster randomized trial for HIV prevention and treatment.
The primary analysis in many randomized controlled trials focuses on the average treatment effect and does not address whether treatment benefits are widespread or limited to a select few. This problem affects many disease areas, since it stems from how randomized trials, often the gold standard for evaluating treatments, are designed and analyzed. Our goal is to estimate the fraction who benefit from a treatment, based on randomized trial data. We consider cases where the primary outcome is continuous, discrete, or ordinal. In general, the fraction who benefit is a non-identifiable parameter, and the best that can be obtained are sharp lower and upper bounds on it. We develop a method to estimate these bounds using a novel application of linear programming, which allows fast implementation. MATLAB software is provided. The method can incorporate information from prognostic baseline variables in order to improve precision, without requiring parametric model assumptions. Also, assumptions based on subject matter knowledge can be incorporated to improve the bounds. We apply our general method to estimate lower and upper bounds on the fraction who benefit from a new surgical intervention for stroke.
Quantitative T_{1} maps (qT_{1}) are often used to study diffuse tissue abnormalities that may be difficult to assess on standard clinical sequences. While qT_{1} maps can provide valuable information for studying the progression and treatment of diseases like multiple sclerosis, the additional scan time required and multi-site implementation issues have limited their inclusion in many standard clinical and research protocols. Hence, the availability of qT_{1} maps has historically been limited.
In this paper, we propose a new method of estimating T_{1} maps retroactively that only requires the acquisition or availability of four conventional MRI sequences. For these sequences, we employ a novel normalization method using cerebellar gray matter as a reference tissue, which allows diffuse differences in cerebral normal-appearing white matter (NAWM) to be detected. We use a regression model, fit separately to each tissue class, that relates the normalized intensities of each sequence to the acquired qT_{1} map value at each voxel using smooth functions. We test our model on a set of 63 subjects, including primary progressive (PPMS), relapsing-remitting (RRMS) and secondary progressive multiple sclerosis (SPMS) patients and healthy controls, and generate statistical qT_{1} maps using cross-validation. We find the estimation error of these maps to be similar to the measurement error of the acquired qT_{1} maps, and we find the prediction error of the statistical and acquired qT_{1} maps to be similar. Visually, the statistical qT_{1} maps are similar to but less noisy than the acquired qT_{1} maps. Nonparametric tests of group differences in NAWM relative to healthy controls show similar results whether acquired or statistical qT_{1} maps are used, but the statistical qT_{1} maps have more power to detect group differences than the acquired maps.
by Al-Ahmadgaid Asaad (noreply@blogger.com) at March 06, 2015 11:27 AM
The new release 0.11.5 of Rcpp just reached the CRAN network for GNU R, and a Debian package has also been uploaded.
Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 345 packages on CRAN depend on Rcpp for making analyses go faster and further; BioConductor adds another 41 packages, and casual searches on GitHub suggest dozens more.
This release continues the 0.11.* release cycle, adding another large number of small bug fixes, polishes and enhancements. Since the previous release in January, we incorporated a number of pull requests and changes from several contributors. This time, JJ deserves a special mention as he is responsible for a metric ton of the changes listed below, making Rcpp Attributes even more awesome. As always, you can follow the development via the GitHub repo and particularly the Issue tickets and Pull Requests. And any discussions, questions, ... regarding Rcpp are always welcome at the rcpp-devel mailing list.
See below for a detailed list of changes extracted from the NEWS
file.
Changes in Rcpp version 0.11.5 (2015-03-04)

Changes in Rcpp API:
- An error handler for tinyformat was defined to prevent the assert() macro from spilling.
- The Rcpp::warning function was added as a wrapper for Rf_warning.
- The XPtr class was extended with new checked_get and release functions as well as improved behavior (throw an exception rather than crash) when a NULL external pointer is dereferenced.
- R code is evaluated within an R_toplevelExec block to prevent user interrupts from bypassing C++ destructors on the stack.
- The Rcpp::Environment constructor can now use a supplied parent environment.
- The Rcpp::Function constructor can now use a supplied environment or namespace.
- The attributes_hidden macro from R is used to shield internal functions; the R_ext/Visibility.h header is now included as well.
- A Rcpp::print function was added as a wrapper around Rf_PrintValue.

Changes in Rcpp Attributes:
- The pkg_types.h file is now included in RcppExports.cpp if it is present in either inst/include or src.
- sourceCpp was modified to allow includes of local files (e.g. #include "foo.hpp"). Implementation files (.cc; .cpp) corresponding to local includes are also automatically built if they exist.
- The generated attributes code was simplified with respect to RNGScope and now uses RObject and its destructor rather than SEXP protect/unprotect.
- Support addition of the rng parameter in Rcpp::export to suppress the otherwise automatic inclusion of RNGScope in generated code.
- Attributes code was made more robust and can e.g. no longer recurse.
- Version 3.2 of the Rtools is now correctly detected as well.
- Allow 'R' to come immediately after '***' for defining embedded R code chunks in sourceCpp.
- The attributes vignette has been updated with documentation on new features added over the past several releases.

Changes in Rcpp tests:
- On Travis CI, all build dependencies are installed as binary .deb packages resulting in faster tests.
Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.
A few weeks ago we introduced the drat package. Its name stands for drat R Archive Template, and it helps with easy-to-create and easy-to-use repositories for R packages. Two early blog posts describe drat: First Steps Towards Lightweight Repositories, and Publishing a Package.
A new version 0.0.2 is now on CRAN. It adds several new features:
Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.
Table 1: Colours Used in the Chart.

| Colour Name | Hexadecimal |
|---|---|
| Dark Violet | #552683 |
| Dark Yellow | #E7A922 |
| White | #FFFFFF |
| Gray (Infographic Text) | #A9A8A7 |
| Dark Yellow (Crime Text) | #CA8B01 |
The first bar chart plots y1, in the data frame dat, in three groupings, grp. Note that the plot you obtain will not match the one below exactly, since the data change every time we run the simulation above. The appearance is then customised via the theme function. One of the elements in the plot to be tweaked is the font; to deal with this we need to import the fonts using the extrafont package. The resulting theme is called kobe_theme since, if you recall from my previous article, the above chart is inspired by a Kobe Bryant infographic. Applying this to the plot gives p1 + kobe_theme(). Should you want to reorder the ticks on the x-axis, starting with A at the top and ending with L at the bottom, the factor levels can simply be reversed. Next, y2 from the dat data frame is plotted, this time using a line plot. Applying kobe_theme again gives p2 + kobe_theme(). We should expect this, since kobe_theme was applied to the bar plot with the coord_flip option enabled, which affects the orientation of the grids. So instead we do a little tweak on the current theme (kobe_theme2()) and see for yourself the difference. The y3 variable is plotted analogously. Finally, the plots are assembled on a fresh canvas with grid.newpage(); text is added with the grid.text function. The position of objects/elements such as texts in the grid is defined by (x, y) coordinates. The bound of the grid by default is a unit square (though the aspect ratio of the square can be modified), so the support of x and y is $[0,1]^2$. Plots are placed with the vplayout function for the coordinates of the placeholder, and print for pasting. Say we want to insert the first plot in the first row, second column: we code it this way.
by Al-Ahmadgaid Asaad (noreply@blogger.com) at February 27, 2015 11:58 AM
by Al-Ahmadgaid Asaad (noreply@blogger.com) at February 25, 2015 10:02 AM
RCall: Running an embedded R in Julia
My purpose in developing RCall is to access datasets from R and R packages, to fit models that are not currently implemented in Julia, and to use R graphics, especially the ggplot2 and lattice packages. Unfortunately I am not currently able to start a graphics device from the embedded R, but I expect that to be fixed soon. One remarkable aspect of RCall, although it may not mean much if you haven't tried to do this kind of thing, is that it is written entirely in Julia. There is absolutely no "glue" code written in a compiled language like C or C++. As I said, this may not mean much to you unless you have tried to do something like this, in which case it is astonishing.
by Douglas Bates (noreply@blogger.com) at February 24, 2015 11:05 PM
the pandas module, which is a Python data analysis library. The read_csv function can read data both locally and from the web. In R we would use print(head(df)), which prints the first six rows of the data, and print(tail(df)) for the last six rows. In Python, however, the default number of rows for the head of the data is 5, unlike R's 6, so the equivalent of the R code head(df, n = 10) in Python is df.head(n = 10). The same goes for the tail of the data. Column and row names are extracted with the colnames and rownames functions in R, respectively; in Python, we extract them using the columns and index attributes. The data can be transposed with the T method and sorted with the sort attribute. Now let's extract a specific column. In Python, we do it using either the iloc or ix attributes, but ix is more robust and thus I prefer it. Assuming we want a slice of the data, we have print df.ix[10:20, ['Abra', 'Apayao', 'Benguet']]. Columns are removed with the drop attribute; the axis argument above tells the function to drop with respect to columns, while with axis = 0 the function drops with respect to rows. Summary statistics come from the describe attribute, and a one-sample t-test from the ttest_1samp function. So, if we want to test the mean of Abra's volume of palay production against the null hypothesis with 15000 as the assumed population mean, we have the following. Functions in Python are defined with def; for example, say we define a function that will add two numbers, we do it as follows, noting that Python uses indentation in place of R's {...}. Now here's an algorithm from my previous post. by Al-Ahmadgaid Asaad (noreply@blogger.com) at February 22, 2015 12:35 PM
We develop a simulation procedure to simulate semicompeting risk survival data. In addition, we introduce an EM algorithm and a B-spline based estimation procedure to evaluate and implement Xu et al. (2010)'s nonparametric likelihood estimation approach. The simulation procedure provides a route to simulate samples from the likelihood introduced in Xu et al. (2010). Further, the EM algorithm and the B-spline methods stabilize the estimation and give accurate estimation results. We illustrate the simulation and the estimation procedure with simulation examples and real data analysis.
by Zachary Deane-Mayer (noreply@blogger.com) at January 16, 2015 10:22 PM
by Gregor Gorjanc (noreply@blogger.com) at January 15, 2015 10:16 PM
The purpose of this post is to show how to use the Boost::Geometry library, which was introduced recently in Rcpp. In particular, we focus on the R-tree data structure for searching objects in space, because it is currently the only spatial index implemented in this library.
Boost.Geometry, part of the Boost C++ Libraries, provides algorithms for solving geometry problems. Within it, the Boost.Geometry.Index component gathers data structures called spatial indexes, which are often used to search for objects in space quickly. Generally speaking, a spatial index stores representations of geometric objects and allows searching for objects occupying some region of space or close to some point in space.
The R-tree is a tree data structure used for spatial searching, i.e., for indexing multi-dimensional information such as geographical coordinates, rectangles or polygons. It was proposed by Antonin Guttman in 1984 as an extension of the B-tree to multi-dimensional data, and it plays a significant role in both theoretical and applied contexts. It is the only spatial index implemented in Boost::Geometry.
As a real application, an R-tree is often used to store spatial objects such as restaurant locations or the polygons that typical maps are made of (streets, buildings, outlines of lakes, coastlines, etc.) in order to answer spatial queries like "Find all stations within 1 km of my current location", "Show me all road segments within 2 km of my location" or "find the nearest gas station", the kind of question we now often ask Google Search by voice. In this way, the R-tree can be used for (nearest neighbor) searches for places.
You can find more explanations about R-tree in Wikipedia.
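Before the C++ code, the idea of a k-nearest-neighbour query over boxes can be made concrete with a brute-force sketch in stdlib-only Python (the data and function names are ours, mirroring the sample boxes used later in the post; a real R-tree exists precisely to avoid this O(n)-per-query scan):

```python
import math

# Hypothetical boxes mirroring the sample data frame below:
# (id, bl_x, bl_y, ur_x, ur_y).
boxes = [(0, 0, 0, 1, 1), (1, 2, 2, 3, 3), (2, 4, 4, 5, 5)]

def box_distance(px, py, box):
    """Distance from point (px, py) to the nearest edge of an
    axis-aligned box; 0 if the point lies inside the box."""
    _, blx, bly, urx, ury = box
    dx = max(blx - px, 0, px - urx)
    dy = max(bly - py, 0, py - ury)
    return math.hypot(dx, dy)

def knn(px, py, k):
    """Brute-force k-nearest-neighbour query: O(n) per query,
    which is exactly the cost an R-tree index is designed to avoid."""
    ranked = sorted(boxes, key=lambda b: box_distance(px, py, b))
    return [b[0] for b in ranked[:k]]

print(knn(0, 0, 1))  # [0]
print(knn(0, 0, 3))  # [0, 1, 2]
```

This sketch returns ids sorted nearest-first; the Boost query later in the post returns the same ids, though not necessarily in that order.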
Now, we write a simple C++ wrapper class of rtree class in Boost::Geometry::Index that we can use in R.
The most important feature to mention here is the use of an Rcpp module to expose your own class to R. Although almost all classes in the Boost library have a lot of member functions, in many cases you do not use all of them. In that case, you should write a wrapper class to keep your code simple.
// [[Rcpp::depends(BH)]]
// Enable C++11 via this plugin to suppress 'long long' errors
// [[Rcpp::plugins("cpp11")]]
#include <vector>
#include <Rcpp.h>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point.hpp>
#include <boost/geometry/geometries/box.hpp>
#include <boost/geometry/index/rtree.hpp>
using namespace Rcpp;
// Mnemonics
namespace bg = boost::geometry;
namespace bgi = boost::geometry::index;
typedef bg::model::point<float, 2, bg::cs::cartesian> point_t;
typedef bg::model::box<point_t> box;
typedef std::pair<box, unsigned int> value_t;
class RTreeCpp {
public:
// Constructor.
// You have to give spatial data as a data frame.
RTreeCpp(DataFrame df) {
int size = df.nrows();
NumericVector id = df[0];
NumericVector bl_x = df[1];
NumericVector bl_y = df[2];
NumericVector ur_x = df[3];
NumericVector ur_y = df[4];
for(int i = 0; i < size; ++i) {
// create a box
box b(point_t(bl_x[i], bl_y[i]), point_t(ur_x[i], ur_y[i]));
// insert new value
rtree_.insert(std::make_pair(b, static_cast<unsigned int>(id[i])));
}
}
// This method (knn) performs a k-nearest neighbor search.
// It returns the n values nearest to the given point in space.
std::vector<int> knn(NumericVector point, unsigned int n) {
std::vector<value_t> result_n;
rtree_.query(bgi::nearest(point_t(point[0], point[1]), n), std::back_inserter(result_n));
std::vector<int> indexes;
std::vector<value_t>::iterator itr;
for ( itr = result_n.begin(); itr != result_n.end(); ++itr ) {
value_t value = *itr;
indexes.push_back( value.second );
}
return indexes;
}
private:
// An R-tree can be created using various algorithms and parameters.
// You can change the algorithm via the template parameter.
// In this example we use quadratic algorithm.
// Maximum number of elements in nodes in R-tree is set to 16.
bgi::rtree<value_t, bgi::quadratic<16> > rtree_;
};
// [[Rcpp::export]]
std::vector<int> showKNN(Rcpp::DataFrame df, NumericVector point, unsigned int n) {
RTreeCpp tree(df); // recreate tree each time
return tree.knn(point, n);
}
First, we create a sample data set of spatial data.
# Sample spatial data(boxes)
points <- data.frame(
id=0:2,
bl_x=c(0, 2, 4),
bl_y=c(0, 2, 4),
ur_x=c(1, 3, 5),
ur_y=c(1, 3, 5))
# To visualize the data, we use the following code:
size <- nrow(points)
#colors for rectangle area
colors <- rainbow(size)
#Plot these points
plot(c(0, 5), c(0, 5), type= "n", xlab="", ylab="")
for(i in 1:size){
rect(points[i, "bl_x"], points[i, "bl_y"], points[i, "ur_x"], points[i, "ur_y"], col=colors[i])
}
legend(4, 2, legend=points$id, fill=colors)
One can use the RTreeCpp class as follows:
# new RTreeCpp object
# Search nearest neighbor points(return value : id of points data)
showKNN(points, c(0, 0), 1)
[1] 0
showKNN(points, c(0, 0), 2)
[1] 1 0
showKNN(points, c(0, 0), 3)
[1] 2 1 0
Note the re-creation of the RTreeCpp
object is of course
inefficient, but the Rcpp Gallery imposes some constraints on how we
present code. For an actual application, a stateful and persistent
object would be created. This could be done via Rcpp Modules as
well as in a number of different ways. Here, however, we need to
recreate the object for each call as knitr
(which is used behind
the scenes) cannot persist objects between code chunks. This is
simply a technical limitation of the Rcpp Gallery—but not of Rcpp
itself.
We teach two software packages, R and SPSS, in Quantitative Methods 101 for psychology freshmen at Bremen University (Germany). Sometimes confusion arises when the software packages produce different results. This may be due to specifics in the implementation of a method or, as in most cases, to different default settings. One of these situations occurs when the QQ-plot is introduced. Below we see two QQ-plots, produced by SPSS and R, respectively. The data used in the plots were generated by:
set.seed(0)
x <- sample(0:9, 100, rep=T)
SPSS
R
qqnorm(x, datax=T) # uses Blom's method by default
qqline(x, datax=T)
There are some obvious differences:
To get a better understanding of the difference we will build the R and SPSS-flavored QQ-plot from scratch.
In order to calculate theoretical quantiles corresponding to the observed values, we first need a way to assign a probability to each value of the original data. A lot of different approaches exist for this purpose (for an overview see e.g. Castillo-Gutiérrez, Lozano-Aguilera, & Estudillo-Martínez, 2012b). They usually build on the ranks of the observed data points to calculate corresponding p-values, i.e. the plotting positions for each point. The qqnorm function uses two formulae for this purpose, depending on the number of observations n (Blom's method, see ?qqnorm; Blom, 1958). With r being the rank, for n <= 10 it will use the formula p = (r - 3/8) / (n + 1/4), and for n > 10 the formula p = (r - 1/2) / n, to determine the probability value p for each observation (see the help files for the functions qqnorm and ppoints). For simplicity, we will only implement the n > 10 case here.
n <- length(x)       # number of observations
r <- order(order(x)) # order of values, i.e. ranks without averaged ties
p <- (r - 1/2) / n   # assign probabilities to ranks using Blom's method
y <- qnorm(p)        # theoretical standard normal quantiles for p values
plot(x, y)           # plot empirical against theoretical values
Before we take a look at the code, note that our plot is identical to the plot generated by qqnorm above, except that the QQ-line is missing. The main point that makes the difference between R and SPSS is found in the command order(order(x)). This command calculates ranks for the observations using ordinal ranking. This means that all observations get different ranks and no average ranks are calculated for ties, i.e. for observations with equal values. Another approach would be to apply fractional ranking and calculate average values for ties. This is what the function rank does. The following code shows the difference between the two approaches to assigning ranks.
v <- c(1,1,2,3,3)
order(order(v)) # ordinal ranking used by R
## [1] 1 2 3 4 5
rank(v) # fractional ranking used by SPSS
## [1] 1.5 1.5 3.0 4.5 4.5
R uses ordinal ranking and SPSS uses fractional ranking by default to assign ranks to values. Thus, the positions do not overlap in R, as each ordered observation is assigned a different rank and therefore a different p-value. We will pick up the second approach again later, when we reproduce the SPSS-flavored plot in R.
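For readers following along in another language, the two ranking schemes are easy to state algorithmically. A small stdlib-only Python sketch (function names are ours), mirroring R's order(order(x)) and rank(x):

```python
def ordinal_ranks(v):
    """Ordinal ranking (like R's order(order(x))): every observation,
    including ties, gets a distinct rank; ties keep appearance order."""
    order = sorted(range(len(v)), key=lambda i: v[i])  # stable sort
    ranks = [0] * len(v)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def fractional_ranks(v):
    """Fractional ranking (like R's rank(x), the SPSS default): tied
    observations share the average of the ordinal ranks they span."""
    ordi = ordinal_ranks(v)
    return [sum(o for o, w in zip(ordi, v) if w == x) / v.count(x)
            for x in v]

v = [1, 1, 2, 3, 3]
print(ordinal_ranks(v))     # [1, 2, 3, 4, 5]
print(fractional_ranks(v))  # [1.5, 1.5, 3.0, 4.5, 4.5]
```

The outputs match the R results shown above: ordinal ranking spreads ties apart, fractional ranking averages over them.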
The second difference between the plots concerned the scaling of the y-axis and was already clarified above.
The last point to understand is how the QQ-line is drawn in R. Looking at the probs argument of qqline reveals that it uses the 1st and 3rd quartile of the original data and theoretical distribution to determine the reference points for the line. We will draw the line between the quartiles in red and overlay it with the line produced by qqline to see if our code is correct.
plot(x, y)                    # plot empirical against theoretical values
ps <- c(.25, .75)             # reference probabilities
a <- quantile(x, ps)          # empirical quantiles
b <- qnorm(ps)                # theoretical quantiles
lines(a, b, lwd=4, col="red") # our QQ line in red
qqline(x, datax=T)            # R QQ line
The reason for different lines in R and SPSS is that several approaches to fitting a straight line exist (for an overview see e.g. Castillo-Gutiérrez, Lozano-Aguilera, & Estudillo-Martínez, 2012a). Each approach has different advantages. The method used by R is more robust when we expect values to diverge from normality in the tails, and we are primarily interested in the normality of the middle range of our data. In other words, the method of fitting an adequate QQ-line depends on the purpose of the plot. An explanation of the rationale of the R approach can e.g. be found here.
The default SPSS approach also uses Blom's method to assign probabilities to ranks (you may choose other methods in SPSS) and differs from the one above in the following aspects:
n <- length(x)           # number of observations
r <- rank(x)             # a) ranks using fractional ranking (averaging ties)
p <- (r - 1/2) / n       # assign probabilities to ranks using Blom's method
y <- qnorm(p)            # theoretical standard normal quantiles for p values
y <- y * sd(x) + mean(x) # b) transform SND quantiles to mean and sd from original data
plot(x, y)               # plot empirical against theoretical values
Lastly, let us add the line. As the scaling of both axes is the same, the line goes through the origin with a slope of 1.
abline(0, 1) # c) line through origin with slope 1
The comparison to the SPSS output shows that they are (visually) identical.
The whole point of this demonstration was to pinpoint and explain the differences between a QQ-plot generated in R and SPSS, so it will no longer be a reason for confusion. Note, however, that SPSS offers a whole range of options to generate the plot. For example, you can select the method to assign probabilities to ranks and decide how to treat ties. The plots above used the default settings (Blom's method and averaging across ties). Personally I like the SPSS version. That is why I implemented the function qqnorm_spss in the ryouready package that accompanies the course. The formulae for the different methods to assign probabilities to ranks can be found in Castillo-Gutiérrez et al. (2012b). The implementation is a preliminary version that has not yet been thoroughly tested. You can find the code here. Please report any bugs or suggestions for improvements (which are very welcome) in the github issues section.
library(devtools)
install_github("markheckmann/ryouready") # install from github repo
library(ryouready)                       # load package
library(ggplot2)
qq <- qqnorm_spss(x, method=1, ties.method="average") # Blom's method with averaged ties
plot(qq)   # generate QQ-plot
ggplot(qq) # use ggplot2 to generate QQ-plot
The purpose of this gallery post is severalfold; among other things, it demonstrates the Rcpp-level sample() function (see here). The application in this post uses an example from Jackman's Bayesian Analysis for the Social Sciences (page 72), which now has a 30-year history in Political Science (see Jackman for more references). The focus is on the extent to which the probability of revolution varies with facing a foreign threat or not. Facing a foreign threat is measured by "defeated ..." or "not defeated ..." over a span of 20 years. The countries come from Latin America. During this period of time, there were only three revolutions: Bolivia (1952), Mexico (1910), and Nicaragua (1979).
| | Revolution | No Revolution |
|---|---|---|
| Defeated and invaded or lost territory | 1 | 7 |
| Not defeated for 20 years | 2 | 74 |
The goal is to learn about the true, unobservable probabilities of revolution given a recent defeat or the absence of one. That is, we care about

θ1 = Pr(revolution | defeated and invaded or lost territory)

and

θ2 = Pr(revolution | not defeated for 20 years).

And, beyond that, we care about whether θ1 and θ2 differ.
These data are assumed to arise from a Binomial process, where the likelihood of the probability parameter value θ is

L(θ) ∝ θ^y (1 − θ)^(n − y),

where n is the total number of revolutions and non-revolutions and y is the number of revolutions. The MLE for this model is just the sample proportion, so a Frequentist statistician would be wondering whether the sample proportion for the defeated group was sufficiently larger than that of the non-defeated group to be unlikely to have happened by chance alone (given the null hypothesis that the two proportions were identical).

A Bayesian statistician could approach the question a bit more directly and compute the probability that θ1 > θ2. To do this, we first need samples from the posterior distribution of θ1 and θ2. In this post, we will get these samples via Sampling Importance Resampling.
Sampling Importance Resampling allows us to sample from the posterior distribution

p(θ | y) ∝ L(θ) p(θ)

by resampling from a series of draws from the prior, p(θ). Denote one of those draws from the prior distribution as θ(i). Then draw i from the prior sample is drawn with replacement into the posterior sample with probability

w_i = L(θ(i)) / Σ_j L(θ(j)).
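The resampling step just described can be sketched in a few lines of stdlib-only Python (a hypothetical illustration with names of our own, not the post's implementation): draw from the prior, weight each draw by its binomial likelihood, normalize, and resample with replacement.

```python
import random

random.seed(42)  # for reproducibility of the resample

def sir(prior_draws, nsucc, nfail, size):
    """Sampling Importance Resampling: resample the prior draws with
    replacement, weighting each draw by its binomial likelihood."""
    wts = [t ** nsucc * (1 - t) ** nfail for t in prior_draws]
    total = sum(wts)
    probs = [w / total for w in wts]
    return random.choices(prior_draws, weights=probs, k=size)

# Discrete toy prior with equal mass on .125, .127 and .8, filtered by
# the "defeated" data (1 success, 7 failures); .8 carries negligible weight.
post = sir([0.125, 0.127, 0.8], nsucc=1, nfail=7, size=30)
print(sorted(set(post)))
```

The same logic, with the weight normalization and resampling moved to C++, is what the samplePost() function below implements.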
We begin by drawing many samples from a series of prior distributions. Although using a Beta prior distribution on the parameter θ admits a closed-form solution, the point here is to demonstrate a simulation-based approach. On the other hand, a Gamma prior distribution over θ is very much not conjugate, and simulation is the best approach.

In particular, we will consider our posterior beliefs about the difference in probabilities under five different prior distributions.
dfPriorInfo <- data.frame(id = 1:5,
dist = c("beta", "beta", "gamma", "beta", "beta"),
par1 = c(1, 1, 3, 10, .5),
par2 = c(1, 5, 20, 10, .5),
stringsAsFactors = FALSE)
dfPriorInfo
  id  dist par1 par2
1  1  beta  1.0  1.0
2  2  beta  1.0  5.0
3  3 gamma  3.0 20.0
4  4  beta 10.0 10.0
5  5  beta  0.5  0.5
Using the data frame dfPriorInfo
and the plyr
package, we will
draw a total of 20,000 values from each of the prior
distributions. This can be done in any number of ways and is
completely independent of using Rcpp for the SIR magic.
library("plyr")
MC1 <- 20000
dfPriors <- ddply(dfPriorInfo, "id",
.fun = (function(X) data.frame(draws = (do.call(paste("r", X$dist, sep = ""),
list(MC1, X$par1, X$par2))))))
However, we can confirm that our draws are as we expect and that we have the right number of them (5 * 20k = 100k).
head(dfPriors)
  id     draws
1  1 0.7124225
2  1 0.5910231
3  1 0.0595327
4  1 0.4718945
5  1 0.4485650
6  1 0.0431667
dim(dfPriors)
[1] 100000 2
Now, we write a C++ snippet that will create our R-level function to
generate a sample of D
values from the prior draws (prdraws
) given
their likelihood after the data (i.e., number of success – nsucc
,
number of failures – nfail
).
The most important feature to mention here is the use of some new and
improved extensions which effectively provide an equivalent,
performant mirror of R’s sample()
function at the
C++-level. Note that you need the RcppArmadillo 0.4.500.0 or newer for this
version of sample()
.
The return value of this function is a length D
vector of draws from
the posterior distribution given the draws from the prior distribution
where the likelihood is used as a filtering weight.
# include <RcppArmadilloExtensions/sample.h>
# include <RcppArmadilloExtensions/fixprob.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp ;
// [[Rcpp::export()]]
NumericVector samplePost (const NumericVector prdraws,
const int D,
const int nsucc,
const int nfail) {
int N = prdraws.size();
NumericVector wts(N);
for (int n = 0 ; n < N ; n++) {
wts(n) = pow(prdraws(n), nsucc) * pow(1 - prdraws(n), nfail);
}
RcppArmadillo::FixProb(wts, N, true);
NumericVector podraws = RcppArmadillo::sample(prdraws, D, true, wts);
return(podraws);
}
To use the samplePost()
function, we create the R representation
of the data as follows.
nS <- c(1, 2) # successes
nF <- c(7, 74) # failures
As a simple example, consider drawing a posterior sample of size 30 for the "defeated" case from a discrete prior distribution with equal weight on the values .125 (the MLE), .127, and .8. We see there is a mixture of .125 and .127 values, but no .8 values: values of .8 were simply too unlikely (given the likelihood) to be resampled from the prior.
table(samplePost(c(.125, .127, .8), 30, nS[1], nF[1]))
0.125 0.127
    9    21
Again making use of the plyr package, we construct samples of size
20,000 for both θ1 and θ2 under each of the 5
prior distribution samples. These posterior draws are stored in the
data frame dfPost
.
MC2 <- 20000
f1 <- function(X) {
draws <- X$draws
t1 <- samplePost(draws, MC2, nS[1], nF[1])
t2 <- samplePost(draws, MC2, nS[2], nF[2])
return(data.frame(theta1 = t1, theta2 = t2))
}
dfPost <- ddply(dfPriors, "id", f1)
head(dfPost)
  id    theta1    theta2
1  1 0.3067334 0.0130865
2  1 0.1421879 0.0420830
3  1 0.3218130 0.0634511
4  1 0.0739756 0.0363466
5  1 0.1065267 0.0460336
6  1 0.0961749 0.0440790
dim(dfPost)
[1] 100000 3
Here, we are visualizing the posterior draws for the quantity of interest, the difference in probabilities of revolution. These posterior draws are grouped according to the prior distribution used. A test of whether revolution is more likely given a foreign threat is operationalized by the probability that θ1 - θ2 is positive. This probability for each distribution is shown in white. For all choices of the prior here, the probability that "foreign threat matters" exceeds .90.
The full posterior distribution of θ1 - θ2 is shown for each of the five priors in blue. A solid, white vertical band indicates "no effect". In all cases, the majority of the mass is clearly to the right of this band.

Recall that the priors are, themselves, over the individual revolution probabilities θ1 and θ2. The general shape of each of these prior distributions of the parameter is shown in a grey box by the white line. For example, the Beta(1, 1) prior is actually a uniform distribution over the parameter space [0, 1]. On the other hand, the Beta(.5, .5) prior has most of its mass at the two tails.
At least across these specifications of the prior distributions on θ1 and θ2, the conclusion that "foreign threats matter" finds a good deal of support. What is interesting about this application is that despite these distributions over the difference in probabilities, the p-value associated with Fisher's Exact Test for 2 x 2 tables is just .262.
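That quoted p-value can be reproduced directly: Fisher's exact test for a 2 x 2 table sums the hypergeometric probabilities of every table with the same margins that is no more probable than the observed one. A stdlib-only Python sketch (function name is ours):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed one."""
    r1 = a + b         # first row total
    n = a + b + c + d  # grand total
    c1 = a + c         # first column total
    denom = comb(n, c1)
    def prob(x):       # P(top-left cell = x) under fixed margins
        return comb(r1, x) * comb(n - r1, c1 - x) / denom
    p_obs = prob(a)
    lo = max(0, c1 - (n - r1))
    hi = min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Revolutions table from the post: defeated (1, 7) vs not defeated (2, 74).
print(round(fisher_exact_two_sided(1, 7, 2, 74), 3))  # 0.262
```

The .262 matches the figure quoted above, underlining the contrast with the posterior probabilities.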
by Zachary Deane-Mayer (noreply@blogger.com) at October 20, 2014 04:24 PM
Users new to the Rcpp family of functionality are often impressed with the performance gains that can be realized, but struggle to see how to approach their own computational problems. Many of the most impressive performance gains are demonstrated with seemingly advanced statistical methods, advanced C++–related constructs, or both. Even when users are able to understand how various demonstrated features operate in isolation, examples may implement too many at once to seem accessible.
The point of this Gallery article is to offer an example application that performs well (thanks to the Rcpp family) but has reduced statistical and programming overhead for some users. In addition, rather than simply presenting the final product, the development process is explicitly documented and narrated.
As an example, we will consider estimating the parameters of the standard Probit regression model given by

y* = Xβ + ε,

where y* and ε are length-N vectors and the presence of an "intercept" term is absorbed into X if desired.

The analyst only has access to a censored version of y*, namely y, where the subscript n denotes the nth observation.

As is common, the censoring is assumed to generate y_n = 1 if y*_n > 0 and y_n = 0 otherwise. When we assume ε_n ~ N(0, 1), the problem is just the Probit regression model loved by all.
To make this concrete, consider a model of voter turnout using the dataset provided by the Zelig R package.
library("Zelig")
data("turnout")
head(turnout)
   race age educate income vote
1 white  60      14 3.3458    1
2 white  51      10 1.8561    0
3 white  24      12 0.6304    0
4 white  38       8 3.4183    1
5 white  25      12 2.7852    1
6 white  67      12 2.3866    1
dim(turnout)
[1] 2000 5
Our goal will be to estimate the parameters associated with the variables income, educate, and age. Since there is nothing special about this dataset, standard methods work perfectly well.
fit0 <- glm(vote ~ income + educate + age,
data = turnout,
family = binomial(link = "probit")
)
fit0
Call:  glm(formula = vote ~ income + educate + age, family = binomial(link = "probit"), data = turnout)

Coefficients:
(Intercept)       income      educate          age
    -1.6824       0.0994       0.1067       0.0169

Degrees of Freedom: 1999 Total (i.e. Null);  1996 Residual
Null Deviance:     2270
Residual Deviance: 2030    AIC: 2040
Using fit0
as our baseline, the question is how can we recover these estimates
with an Rcpp-based approach. One answer is implement the EM-algorithm in C++
snippets that can be processed into R-level functions; that’s what we will
do. (Think of this as a Probit regression analog to
the linear regression example — but with fewer features.)
For those unfamiliar with the EM algorithm, consider the Wikipedia article and a denser set of Swarthmore lecture notes.
The intuition behind this approach begins by noticing that if mother nature revealed the y* values, we would simply have a linear regression problem and focus on

beta-hat = (X'X)^(-1) X'y*,

where the meaning of the matrix notation is assumed.

Because mother nature is not so kind, we have to impute the y* values. For a given guess of beta, due to our distributional assumptions about epsilon we know that

E[y*_n | y_n = 1] = mu_n + phi(-mu_n) / (1 - Phi(-mu_n))

and

E[y*_n | y_n = 0] = mu_n - phi(-mu_n) / Phi(-mu_n),

where mu = X beta, phi is the standard normal density and Phi is the standard normal CDF.

By iterating through these two steps we can eventually recover the desired parameter estimates: impute the latent y* given the current beta (the E step), then update beta by least squares on the imputed y* (the M step).
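Before turning to the Rcpp implementations, the two-step iteration can be sketched in plain, stdlib-only Python for an intercept-plus-one-slope model on simulated data (all names and the simulated coefficients are ours, not from the post):

```python
import math
import random

def phi(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def em_probit(x, y, maxit=200):
    """EM for a probit model with an intercept and one slope.
    E step: impute E[y* | y, beta] from the truncated normal.
    M step: closed-form least squares of the imputed y* on x."""
    a, b = 0.0, 0.0
    n = len(x)
    for _ in range(maxit):
        mu = [a + b * xi for xi in x]
        # E step: conditional means of the latent propensity
        ystar = [m + phi(-m) / (1 - Phi(-m)) if yi == 1
                 else m - phi(-m) / Phi(-m)
                 for m, yi in zip(mu, y)]
        # M step: simple-regression OLS in closed form
        xbar = sum(x) / n
        ybar = sum(ystar) / n
        b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, ystar)) /
             sum((xi - xbar) ** 2 for xi in x))
        a = ybar - b * xbar
    return a, b

# Simulated data (our own): y* = -0.5 + 1.0 * x + N(0, 1), y = 1{y* > 0}.
random.seed(1)
xs = [random.uniform(-2, 2) for _ in range(2000)]
ys = [1 if -0.5 + 1.0 * xi + random.gauss(0, 1) > 0 else 0 for xi in xs]
a_hat, b_hat = em_probit(xs, ys)
print(a_hat, b_hat)  # should land near (-0.5, 1.0)
```

The two truncated-normal means in the E step are exactly the quantities that the C++ helpers f() and g() compute later in the post.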
To demonstrate implementation of the EM algorithm for a Probit regression model using Rcpp-provided functionality we consider a series of steps.
These are:
These steps are not chosen because each produces useful output (from the perspective of parameters estimation), but because they mirror milestones in a development process that benefits new users: only small changes are made at a time.
To begin, we prepare our R-level data for passage to our eventual C++-based functions.
mY <- matrix(turnout$vote)
mX <- cbind(1,
turnout$income,
turnout$educate,
turnout$age
)
The first milestone will be to mock up a function em1
that is exported to
create an R-level function of the same name. The key features here are that we
have defined the function to
- accept arguments corresponding to likely inputs
- create containers for the to-be-computed values,
- outline the main loop of the code for the EM iterations, and
- return various values of interest in a list
Users new to the Rcpp process will benefit from returning List objects in the beginning. They allow you to rapidly return new and different values to the R level for inspection.
# include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp ;
// [[Rcpp::export()]]
List em1 (const arma::mat y,
const arma::mat X,
const int maxit = 10
) {
// inputs
const int N = y.n_rows ;
const int K = X.n_cols ;
// containers
arma::mat beta(K, 1) ;
beta.fill(0.0) ; // initialize betas to 0
arma::mat eystar(N, 1) ;
eystar.fill(0) ;
// algorithm
for (int it = 0 ; it < maxit ; it++) { // EM iterations
// NEXT STEP: implement algorithm
}
// returns
List ret ;
ret["N"] = N ;
ret["K"] = K ;
ret["beta"] = beta ;
ret["eystar"] = eystar ;
return(ret) ;
}
We know that this code does not produce estimates of anything. Indeed, that is
by design. Neither the beta
nor eystar
elements of the returned list
are
ever updated after they are initialized to 0.
However, we can see that much of the administrative work for a working implementation is complete.
fit1 <- em1(y = mY,
X = mX,
maxit = 20
)
fit1$beta
     [,1]
[1,]    0
[2,]    0
[3,]    0
[4,]    0
head(fit1$eystar)
     [,1]
[1,]    0
[2,]    0
[3,]    0
[4,]    0
[5,]    0
[6,]    0
Having verified that input data structures and output data structures are “working” as expected, we turn to updating the values.
Updates to the y* values are different depending on whether y_n = 1 or y_n = 0. Rather than worrying about correctly imputing the unobserved propensities, we will use dummy values of 1 and -1 as placeholders. Instead, the focus is on building out the necessary conditional structure of the code and looping through the update step for every observation.

Additionally, at the end of each imputation step (the E in EM) we update the beta estimate with the least squares estimate (the M in EM).
# include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp ;
// [[Rcpp::export()]]
List em2 (const arma::mat y,
const arma::mat X,
const int maxit = 10
) {
// inputs
const int N = y.n_rows ;
const int K = X.n_cols ;
// containers
arma::mat beta(K, 1) ;
beta.fill(0.0) ; // initialize betas to 0
arma::mat eystar(N, 1) ;
eystar.fill(0) ;
// algorithm
for (int it = 0 ; it < maxit ; it++) {
arma::mat mu = X * beta ;
// augmentation step
for (int n = 0 ; n < N ; n++) {
if (y(n, 0) == 1) { // y = 1
// NEXT STEP: fix augmentation
eystar(n, 0) = 1 ;
}
if (y(n, 0) == 0) { // y = 0
// NEXT STEP: fix augmentation
eystar(n, 0) = -1 ;
}
}
// maximization step
beta = (X.t() * X).i() * X.t() * eystar ;
}
// returns
List ret ;
ret["N"] = N ;
ret["K"] = K ;
ret["beta"] = beta ;
ret["eystar"] = eystar ;
return(ret) ;
}
This code, like that in Attempt 1, is syntactically fine. But, as we know, the
update step is very wrong. However, we can see that the updates are happening as
we’d expect and we see non-zero returns for the beta
element and the eystar
element.
fit2 <- em2(y = mY,
X = mX,
maxit = 20
)
fit2$beta
          [,1]
[1,] -0.816273
[2,]  0.046065
[3,]  0.059481
[4,]  0.009085
head(fit2$eystar)
     [,1]
[1,]    1
[2,]   -1
[3,]   -1
[4,]    1
[5,]    1
[6,]    1
With the final logical structure of the code built out, we will now correct the
data augmentation. Specifically, we replace the assignment of 1 and -1 with the
expectation of the unobservable values y*. Rather than muddy our EM
function (em3()) with further arithmetic, we simply call the C++-level
functions f() and g(), which were included prior to our definition of
functions f()
and g()
which were included prior to our definition of
em3()
.
But, since these are just utility functions needed internally by em3()
, they
are not tagged to be exported (via // [[Rcpp::export()]]
) to the R level.
As it stands, this is a correct implementation (although there is room for improvement).
# include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp ;
double f (double mu) {
double val = ((R::dnorm(-mu, 0, 1, false)) /
(1 - R::pnorm(-mu, 0, 1, true, false))
) ;
return(val) ;
}
double g (double mu) {
double val = ((R::dnorm(-mu, 0, 1, false)) /
(R::pnorm(-mu, 0, 1, true, false))
) ;
return(val) ;
}
// [[Rcpp::export()]]
List em3 (const arma::mat y,
const arma::mat X,
const int maxit = 10
) {
// inputs
const int N = y.n_rows ;
const int K = X.n_cols ;
// containers
arma::mat beta(K, 1) ;
beta.fill(0.0) ; // initialize betas to 0
arma::mat eystar(N, 1) ;
eystar.fill(0) ;
// algorithm
for (int it = 0 ; it < maxit ; it++) {
arma::mat mu = X * beta ;
// augmentation step
// NEXT STEP: parallelize augmentation step
for (int n = 0 ; n < N ; n++) {
if (y(n, 0) == 1) { // y = 1
eystar(n, 0) = mu(n, 0) + f(mu(n, 0)) ;
}
if (y(n, 0) == 0) { // y = 0
eystar(n, 0) = mu(n, 0) - g(mu(n, 0)) ;
}
}
// maximization step
beta = (X.t() * X).i() * X.t() * eystar ;
}
// returns
List ret ;
ret["N"] = N ;
ret["K"] = K ;
ret["beta"] = beta ;
ret["eystar"] = eystar ;
return(ret) ;
}
fit3 <- em3(y = mY,
X = mX,
maxit = 100
)
head(fit3$eystar)
        [,1]
[1,]  1.3910
[2,] -0.6599
[3,] -0.7743
[4,]  0.8563
[5,]  0.9160
[6,]  1.2677
Notice also that this output is identical to the parameter estimates (the object fit0) from our R-level call to the glm() function.
) from our R level call to the glm()
function.
fit3$beta
         [,1]
[1,] -1.68241
[2,]  0.09936
[3,]  0.10667
[4,]  0.01692
fit0
Call:  glm(formula = vote ~ income + educate + age, family = binomial(link = "probit"), data = turnout)

Coefficients:
(Intercept)       income      educate          age
    -1.6824       0.0994       0.1067       0.0169

Degrees of Freedom: 1999 Total (i.e. Null);  1996 Residual
Null Deviance:     2270
Residual Deviance: 2030    AIC: 2040
With a functional implementation complete as em3()
, we now turn to the second
order concern: performance. The time required to evaluate our function can be
reduced from the perspective of a user sitting at a computer with idle cores.
Although the small size of these data doesn't necessitate parallelization, the E step is a natural candidate for being parallelized. Here, the parallelization relies on OpenMP. See here for other examples of combining Rcpp and OpenMP, or here for a different approach.
Sys.setenv("PKG_CXXFLAGS" = "-fopenmp")
Sys.setenv("PKG_LIBS" = "-fopenmp")
Aside from some additional compiler flags, the changes to our new implementation
in em4() are minimal. They are: include the omp.h header, set the number of threads via the new nthr argument (omp_set_num_threads()), and mark the augmentation for loop for parallelization with a #pragma.
# include <RcppArmadillo.h>
# include <omp.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp ;
double f (double mu) {
double val = ((R::dnorm(-mu, 0, 1, false)) /
(1 - R::pnorm(-mu, 0, 1, true, false))
) ;
return(val) ;
}
double g (double mu) {
double val = ((R::dnorm(-mu, 0, 1, false)) /
(R::pnorm(-mu, 0, 1, true, false))
) ;
return(val) ;
}
// [[Rcpp::export()]]
List em4 (const arma::mat y,
const arma::mat X,
const int maxit = 10,
const int nthr = 1
) {
// inputs
const int N = y.n_rows ;
const int K = X.n_cols ;
omp_set_num_threads(nthr) ;
// containers
arma::mat beta(K, 1) ;
beta.fill(0.0) ; // initialize betas to 0
arma::mat eystar(N, 1) ;
eystar.fill(0) ;
// algorithm
for (int it = 0 ; it < maxit ; it++) {
arma::mat mu = X * beta ;
// augmentation step
#pragma omp parallel for
for (int n = 0 ; n < N ; n++) {
if (y(n, 0) == 1) { // y = 1
eystar(n, 0) = mu(n, 0) + f(mu(n, 0)) ;
}
if (y(n, 0) == 0) { // y = 0
eystar(n, 0) = mu(n, 0) - g(mu(n, 0)) ;
}
}
// maximization step
beta = (X.t() * X).i() * X.t() * eystar ;
}
// returns
List ret ;
ret["N"] = N ;
ret["K"] = K ;
ret["beta"] = beta ;
ret["eystar"] = eystar ;
return(ret) ;
}
This change should not (and does not) result in any change to the calculations being done. However, if our algorithm involved random number generation, great care would need to be taken to ensure our results were reproducible.
fit4 <- em4(y = mY,
X = mX,
maxit = 100
)
identical(fit4$beta, fit3$beta)
[1] TRUE
Finally, we can confirm that our parallelization was “successful”. Again, because there is really no need to parallelize this code, performance gains are modest. But it clearly does run faster.
library("microbenchmark")
microbenchmark(seq = (em3(y = mY,
X = mX,
maxit = 100
)
),
par = (em4(y = mY,
X = mX,
maxit = 100,
nthr = 4
)
),
times = 20
)
Unit: milliseconds
 expr   min    lq  mean median    uq   max neval cld
  seq 32.94 33.01 33.04  33.03 33.07 33.25    20   b
  par 11.16 11.20 11.35  11.26 11.29 13.16    20  a
The purpose of this lengthy gallery post is neither to demonstrate new functionality nor the computational feasibility of cutting-edge algorithms. Rather, it is to walk explicitly through a development process that new users can benefit from adopting, using a very common statistical problem: probit regression.
Today I stumbled across a figure in an explanation on multiple factor analysis which contained pictograms.
Figure 1 from Abdi & Valentin (2007), p. 8.
I wanted to reproduce a similar figure in R using pictograms and additionally color them, e.g., by group membership. I have almost no knowledge about image processing, so I tried out several methods to achieve what I want. The first thing I did was read in a PNG file and look at the data structure. The package png allows reading in PNG files. Note that all of the below may not work on Windows machines, as Windows does not support semi-transparency (see ?readPNG).
library(png)
img <- readPNG(system.file("img", "Rlogo.png", package="png"))
class(img)
## [1] "array"
dim(img)
## [1] 76 100 4
The object is a numerical array with four layers (red, green, blue, alpha; short RGBA). Let’s have a look at the first layer (red) and replace all non-zero entries by a one and the zeros by a dot. This will show us the pattern of non-zero values and we already see the contours.
l4 <- img[,,1]
l4[l4 > 0] <- 1
l4[l4 == 0] <- "."
d <- apply(l4, 1, function(x) {
  cat(paste0(x, collapse=""), "\n")
})
To display the image in R, one way is to raster the image (i.e. collapse the RGBA layers into a single layer of HEX values) and print it using rasterImage.
rimg <- as.raster(img)          # raster multilayer object
r <- nrow(rimg) / ncol(rimg)    # image ratio
plot(c(0,1), c(0,r), type = "n", xlab = "", ylab = "", asp=1)
rasterImage(rimg, 0, 0, 1, r)
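For intuition, here is what that per-pixel collapse looks like, sketched in Python (the function name is mine, not part of any package):

```python
def rgba_to_hex(r, g, b, a):
    # Collapse one RGBA pixel (channel floats in [0, 1]) into a single
    # "#RRGGBBAA" hex string, which is what rastering does per pixel.
    return "#" + "".join(f"{round(c * 255):02X}" for c in (r, g, b, a))
```

For example, a fully transparent black pixel collapses to "#00000000", exactly the value filling the empty corners of the rastered matrix below.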
Let’s have a look at a small part of the rastered image object. It is a matrix of HEX values.
rimg[40:50, 1:6]
##  [1,] "#C4C5C202" "#858981E8" "#838881FF" "#888D86FF" "#8D918AFF" "#8F938CFF"
##  [2,] "#00000000" "#848881A0" "#80847CFF" "#858A83FF" "#898E87FF" "#8D918BFF"
##  [3,] "#00000000" "#8B8E884C" "#7D817AFF" "#82867EFF" "#868B84FF" "#8A8E88FF"
##  [4,] "#00000000" "#9FA29D04" "#7E827BE6" "#7E817AFF" "#838780FF" "#878C85FF"
##  [5,] "#00000000" "#00000000" "#81857D7C" "#797E75FF" "#7F827BFF" "#838781FF"
##  [6,] "#00000000" "#00000000" "#898C8510" "#787D75EE" "#797E76FF" "#7F837BFF"
##  [7,] "#00000000" "#00000000" "#00000000" "#7F837C7B" "#747971FF" "#797E76FF"
##  [8,] "#00000000" "#00000000" "#00000000" "#999C9608" "#767C73DB" "#747971FF"
##  [9,] "#00000000" "#00000000" "#00000000" "#00000000" "#80847D40" "#71766EFD"
## [10,] "#00000000" "#00000000" "#00000000" "#00000000" "#00000000" "#787D7589"
## [11,] "#00000000" "#00000000" "#00000000" "#00000000" "#00000000" "#999C9604"
And print this small part.
plot(c(0,1), c(0,.6), type = "n", xlab = "", ylab = "", asp=1) rasterImage(rimg[40:50, 1:6], 0, 0, 1, .6)
Now we have an idea of what the image object and the rastered object look like from the inside. Let’s start to modify the images to suit our needs.
In order to change the color of the pictograms, my first idea was to convert the graphics to greyscale and remap the values to a color ramp of my choice. To convert to greyscale there are tons of methods around (see e.g. here). I just picked one of them that I found on SO by chance. With R=Red, G=Green and B=Blue we have
brightness = sqrt(0.299 * R^2 + 0.587 * G^2 + 0.114 * B^2)
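In Python the same formula is a one-liner (channel values assumed in [0, 1]; the function name is mine):

```python
def brightness(r, g, b):
    # Perceptual brightness of an RGB color using the weighted
    # formula quoted above; channels are floats in [0, 1].
    return (0.299 * r**2 + 0.587 * g**2 + 0.114 * b**2) ** 0.5
```

The weights reflect perceived luminance: a pure green pixel reads as brighter than pure red, which in turn reads as brighter than pure blue.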
This approach modifies the PNG files after they have been coerced into a raster object.
# function to calculate brightness values
brightness <- function(hex) {
  v <- col2rgb(hex)
  sqrt(0.299 * v[1]^2 + 0.587 * v[2]^2 + 0.114 * v[3]^2) / 255
}

# given a color ramp, map brightness to ramp also taking into account
# the alpha level. The default color ramp is grey
#
img_to_colorramp <- function(img, ramp=grey) {
  cv <- as.vector(img)
  b <- sapply(cv, brightness)
  g <- ramp(b)
  a <- substr(cv, 8, 9)       # get alpha values
  ga <- paste0(g, a)          # add alpha values to new colors
  img.grey <- matrix(ga, nrow(img), ncol(img), byrow=TRUE)
}

# read png and modify
img <- readPNG(system.file("img", "Rlogo.png", package="png"))
img <- as.raster(img)         # raster multilayer object
r <- nrow(img) / ncol(img)    # image ratio
s <- 3.5                      # size
plot(c(0,10), c(0,3.5), type = "n", xlab = "", ylab = "", asp=1)
rasterImage(img, 0, 0, 0+s/r, 0+s)     # original
img2 <- img_to_colorramp(img)          # modify using grey scale
rasterImage(img2, 5, 0, 5+s/r, 0+s)
Great, it works! Now let’s go and try out some other color palettes, using colorRamp to create a color ramp.
plot(c(0,10), c(0,8.5), type = "n", xlab = "", ylab = "", asp=1)
img1 <- img_to_colorramp(img)
rasterImage(img1, 0, 5, 0+s/r, 5+s)

reds <- function(x)
  rgb(colorRamp(c("darkred", "white"))(x), maxColorValue = 255)
img2 <- img_to_colorramp(img, reds)
rasterImage(img2, 5, 5, 5+s/r, 5+s)

greens <- function(x)
  rgb(colorRamp(c("darkgreen", "white"))(x), maxColorValue = 255)
img3 <- img_to_colorramp(img, greens)
rasterImage(img3, 0, 0, 0+s/r, 0+s)

single_color <- function(...) "#0000BB"
img4 <- img_to_colorramp(img, single_color)
rasterImage(img4, 5, 0, 5+s/r, 0+s)
Okay, that basically does the job. Now we will apply it to the wine pictograms.
Let’s use this wine glass from Wikimedia Commons. It’s quite big, so I uploaded a reduced-size version to imgur. We will use it for our purposes.
# load file from web
f <- tempfile()
download.file("http://i.imgur.com/A14ntCt.png", f)
img <- readPNG(f)
img <- as.raster(img)
r <- nrow(img) / ncol(img)
s <- 1

# let's create a function that returns a ramp function to save typing
ramp <- function(colors)
  function(x) rgb(colorRamp(colors)(x), maxColorValue = 255)

# create dataframe with coordinates and colors
set.seed(1)
x <- data.frame(x=rnorm(16, c(2,2,4,4)),
                y=rnorm(16, c(1,3)),
                colors=c("black", "darkred", "darkgreen", "darkblue"))

plot(c(1,6), c(0,5), type="n", xlab="", ylab="", asp=1)
for (i in 1L:nrow(x)) {
  colorramp <- ramp(c(x[i,3], "white"))
  img2 <- img_to_colorramp(img, colorramp)
  rasterImage(img2, x[i,1], x[i,2], x[i,1]+s/r, x[i,2]+s)
}
Another approach would be to modify the RGB layers before rastering to HEX values.
img <- readPNG(system.file("img", "Rlogo.png", package="png"))
img2 <- img
img[,,1] <- 0    # remove Red component
img[,,2] <- 0    # remove Green component
img[,,3] <- 1    # Set Blue to max
img <- as.raster(img)
r <- nrow(img) / ncol(img)   # size ratio
s <- 3.5                     # size
plot(c(0,10), c(0,3.5), type = "n", xlab = "", ylab = "", asp=1)
rasterImage(img, 0, 0, 0+s/r, 0+s)

img2[,,1] <- 1   # Red to max
img2[,,2] <- 0
img2[,,3] <- 0
rasterImage(as.raster(img2), 5, 0, 5+s/r, 0+s)
To just colorize the image, we could weight each layer.
# wrap weighting into function
weight_layers <- function(img, w) {
  for (i in seq_along(w))
    img[,,i] <- img[,,i] * w[i]
  img
}

plot(c(0,10), c(0,3.5), type = "n", xlab = "", ylab = "", asp=1)
img <- readPNG(system.file("img", "Rlogo.png", package="png"))
img2 <- weight_layers(img, c(.2, 1, .2))
rasterImage(img2, 0, 0, 0+s/r, 0+s)

img3 <- weight_layers(img, c(1, 0, 0))
rasterImage(img3, 5, 0, 5+s/r, 0+s)
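The same channel weighting is easy to express in Python with numpy, shown here on a synthetic RGBA array rather than a real PNG (a sketch, not the post's R code):

```python
import numpy as np

def weight_layers(img, w):
    # img: (H, W, 4) RGBA array with float channels in [0, 1].
    # Scale the R, G, B layers by the weights in w; leave alpha alone.
    out = img.copy()
    out[..., :3] = out[..., :3] * np.asarray(w)
    return out

# a 2x2 all-white, fully opaque dummy image
img = np.ones((2, 2, 4))
tinted = weight_layers(img, (0.2, 1.0, 0.2))  # greenish tint, alpha untouched
```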
After playing around and hard-coding the modifications, I started to search and found the EBImage package, which has a lot of features for image processing that make one's life (in this case only a bit) easier.
library(EBImage)
f <- system.file("img", "Rlogo.png", package="png")
img <- readImage(f)
img2 <- img
img[,,2] = 0     # zero out green layer
img[,,3] = 0     # zero out blue layer
img <- as.raster(img)
img2[,,1] = 0
img2[,,3] = 0
img2 <- as.raster(img2)
r <- nrow(img) / ncol(img)
s <- 3.5
plot(c(0,10), c(0,3.5), type = "n", xlab = "", ylab = "", asp=1)
rasterImage(img, 0, 0, 0+s/r, 0+s)
rasterImage(img2, 5, 0, 5+s/r, 0+s)
EBImage is a good choice and fairly easy to handle. Now let’s again print the pictograms.
f <- tempfile(fileext=".png")
download.file("http://i.imgur.com/A14ntCt.png", f)
img <- readImage(f)

# will replace whole image layers by one value
# only makes sense if there is an alpha layer that
# gives the contours
#
mod_color <- function(img, col) {
  v <- col2rgb(col) / 255
  img = channel(img, 'rgb')
  img[,,1] = v[1]    # Red
  img[,,2] = v[2]    # Green
  img[,,3] = v[3]    # Blue
  as.raster(img)
}

r <- nrow(img) / ncol(img)   # get image ratio
s <- 1                       # size

# create random data
set.seed(1)
x <- data.frame(x=rnorm(16, c(2,2,4,4)),
                y=rnorm(16, c(1,3)),
                colors=1:4)

# plot pictograms
plot(c(1,6), c(0,5), type="n", xlab="", ylab="", asp=1)
for (i in 1L:nrow(x)) {
  img2 <- mod_color(img, x[i, 3])
  rasterImage(img2, x[i,1], x[i,2], x[i,1]+s*r, x[i,2]+s)
}
Note that above I did not bother to center each pictogram to position it correctly. This still needs to be done. Anyway, that’s it! Mission completed.
Abdi, H., & Valentin, D. (2007). Multiple factor analysis (MFA). In N. Salkind (Ed.), Encyclopedia of Measurement and Statistics (pp. 1–14). Thousand Oaks, CA: Sage Publications. Retrieved from https://www.utdallas.edu/~herve/Abdi-MFA2007-pretty.pdf
I've been putting off sharing this idea because I've heard the rumors about what happens to folks who aren't security experts when they post about security on the internet. If this blog is replaced with cat photos and rainbows, you'll know what happened.
It's 2014 and chances are you have accounts on websites that are not properly handling user passwords. I did no research to produce the following list of ways passwords are mishandled in decreasing order of frequency:
SHA1(salt + plain-password)
We know that sites should be generating secure random salts and using an established slow hashing algorithm (bcrypt, scrypt, or PBKDF2). Why are sites not doing this?
While security issues deserve a top spot on any site's priority list, new features often trump addressing legacy security concerns. The immediacy of the risk is hard to quantify and it's easy to fall prey to a "nothing bad has happened yet, why should we change now" attitude. It's easy for other bugs, features, or performance issues to win out when measured by immediate impact. Fixing security or other "legacy" issues is the Right Thing To Do and often you will see no measurable benefit from the investment. It's like having insurance. You don't need it until you do.
Specific to the improper storage of user password data is the issue of the impact to a site imposed by upgrading. There are two common approaches to upgrading password storage. You can switch cold turkey to the improved algorithms and force password resets on all of your users. Alternatively, you can migrate incrementally such that new users and any user who changes their password gets the increased security.
The cold turkey approach is not a great user experience and sites might choose to delay an upgrade to avoid admitting to a weak security implementation and disrupting their site by forcing password resets.
The incremental approach is more appealing, but the security benefit is drastically diminished for any site with a substantial set of existing users.
Given the above migration choices, perhaps it's (slightly) less surprising that businesses choose to prioritize other work ahead of fixing poorly stored user password data.
What if you could upgrade a site so that both new and existing users immediately benefited from the increased security, but without the disruption of password resets? It turns out that you can and it isn't very hard.
Consider a user table with columns:
userid
salt
hashed_pass
Where the hashed_pass
column is computed using a weak fast
algorithm, for example SHA1(salt + plain_pass)
.
The core of the idea is to apply a proper algorithm on top of the data
we already have. I'll use bcrypt
to make the discussion
concrete. Add columns to the user table as follows:
userid
salt
hashed_pass
hash_type
salt2
Process the existing user table by computing bcrypt(salt2 +
hashed_pass)
and storing the result in the hashed_pass
column
(overwriting the less secure value); save the new salt value to
salt2
and set hash_type
to bcrypt+sha1
.
To verify a user where hash_type
is bcrypt+sha1
, compute
bcrypt(salt2 + SHA1(salt + plain_pass))
and compare to the
hashed_pass
value. Note that bcrypt implementations encode the salt
as a prefix of the hashed value so you could avoid the salt2
column,
but it makes the idea easier to explain to have it there.
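Here is a sketch of the scheme in Python. Since a bcrypt dependency may not be available everywhere, PBKDF2-HMAC from the standard library stands in for bcrypt; the column and function names are mine and purely illustrative:

```python
import hashlib
import hmac
import os

def slow_hash(data: bytes, salt: bytes) -> bytes:
    # Stand-in for bcrypt: a deliberately slow, salted hash.
    return hashlib.pbkdf2_hmac("sha256", data, salt, 100_000)

def upgrade_row(row):
    # Wrap the existing weak hash in place, without knowing the password.
    row["salt2"] = os.urandom(16)
    row["hashed_pass"] = slow_hash(row["hashed_pass"], row["salt2"])
    row["hash_type"] = "pbkdf2+sha1"
    return row

def verify(row, plain_pass: str) -> bool:
    # Recompute the layered hash: slow_hash(salt2 + SHA1(salt + plain_pass)).
    weak = hashlib.sha1(row["salt"] + plain_pass.encode()).digest()
    candidate = slow_hash(weak, row["salt2"])
    return hmac.compare_digest(candidate, row["hashed_pass"])
```

The key property is that upgrade_row needs only the stored weak hash, so every existing user is protected immediately, with no password reset.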
You can take this approach further and have any user that logs in (as
well as new users) upgrade to a "clean" bcrypt only algorithm since
you can now support different verification algorithms using
hash_type
. With the proper application code changes in place, the
upgrade can be done live.
This scheme will also work for sites storing non-salted password hashes as well as those storing plain text passwords (THE HORROR).
Perhaps this approach makes implementing a password storage security upgrade more palatable and more likely to be prioritized. And if there's a horrible flaw in this approach, maybe you'll let me know without turning this blog into a tangle of cat photos and rainbows.
If you use rebar to generate an OTP release project and want to
have reproducible builds, you need the rebar_lock_deps_plugin
plugin. The plugin provides a lock-deps
command that will generate a
rebar.config.lock
file containing the complete flattened set of
project dependencies each pegged to a git SHA. The lock file acts
similarly to Bundler's Gemfile.lock
file and allows for reproducible
builds (*).
Without lock-deps
you might rely on the discipline of using a tag
for all of your application's deps. This is insufficient if any dep
depends on something not specified as a tag. It can also be a problem
if a third party dep doesn't provide a tag. Generating a
rebar.config.lock
file solves these issues. Moreover, using
lock-deps
can simplify the work of putting together a release
consisting of many of your own repos. If you treat the master branch
as shippable, then rather than tagging each subproject and updating
rebar.config
throughout your project's dependency chain, you can
run get-deps
(without the lock file), compile
, and re-lock at the
latest versions throughout your project repositories.
The reproducibility of builds when using lock-deps
depends on the
SHAs captured in rebar.config.lock
. The plugin works by scanning the
cloned repos in your project's deps
directory and extracting the
current commit SHA. This works great until a repository's history is
rewritten with a force push. If you really want reproducible builds,
you need to not nuke your SHAs and you'll need to fork all third party
repos to ensure that someone else doesn't screw you over in this
fashion either. If you make a habit of only depending on third party
repos using a tag, assume that upstream maintainers are not completely
bat shit crazy, and don't force push your master branch, then you'll
probably be fine.
Install the plugin in your project by adding the following to your
rebar.config
file:
%% Plugin dependency
{deps, [
{rebar_lock_deps_plugin, ".*",
{git, "git://github.com/seth/rebar_lock_deps_plugin.git", {branch, "master"}}}
]}.
%% Plugin usage
{plugins, [rebar_lock_deps_plugin]}.
To test it out do:
rebar get-deps
# the plugin has to be compiled so you can use it
rebar compile
rebar lock-deps
If you'd like to see a project that uses the plugin, take a look at Chef's erchef project.
If you are building an OTP release project using rebar generate
then
you can use rebar_lock_deps_plugin
to enhance your build experience
in three easy steps.
Use rebar bump-rel-version version=$BUMP
to automate the process
of editing rel/reltool.config
to update the release version. The
argument $BUMP
can be major
, minor
, or patch
(default) to
increment the specified part of a semver X.Y.Z
version. If
$BUMP
is any other value, it is used as the new version
verbatim. Note that this function rewrites rel/reltool.config
using ~p
. I check in the reformatted version and maintain the
formatting when editing. This way, the general case of a version
bump via bump-rel-version
results in a minimal diff.
Autogenerate a change summary commit message for all project
deps. Assuming you've generated a new lock file and bumped the
release version, use rebar commit-release
to commit the changes
to rebar.config.lock
and rel/reltool.config
with a commit
message that summarizes the changes made to each dependency between
the previously locked version and the newly locked version. You can
get a preview of the commit message via rebar log-changed-deps
.
Finally, create an annotated tag for your new release with rebar
tag-release
which will read the current version from
rel/reltool.config
and create an annotated tag named with the
version.
Up to version 2.0.1 of rebar_lock_deps_plugin
, the dependencies in
the generated lock file were ordered alphabetically. This was a
side-effect of using filelib:wildcard/1
to list the dependencies in
the top-level deps
directory. In most cases, the order of the full
dependency set does not matter. However, if some of the code in your
project uses parse transforms, then it will be important for the parse
transform to be compiled and on the code path before attempting to
compile code that uses the parse transform.
This issue was recently discovered by a colleague who ran into build
issues using the lock file for a project that had recently integrated
lager for logging. He came up with the idea of maintaining the
order of deps as they appear in the various rebar.config
files along
with a prototype patch proving out the idea. As of
rebar_lock_deps_plugin
3.0.0, the lock-deps
command will (mostly)
maintain the relative order of dependencies as found in the
rebar.config
files.
The "mostly" is that when a dep is shared across two subprojects, it
will appear in the expected order for the first subproject (based on
the ordering of the two subprojects). The deps for the second
subproject will not be in strict rebar.config
order, but the
resulting order should address any compile-time dependencies and be
relatively stable (only changing when project deps alter their deps
with larger impact when shared deps are introduced or removed).
There are times, as a programmer, when a real-world problem looks like a text book exercise (or an interview whiteboard question). Just the other day at work we had to design some manhole covers, but I digress.
Fixing the order of the dependencies in the generated lock file is
(nearly) the same as finding an install order for a set of projects
with inter-dependencies. I had some fun coding up the text book
solution even though the approach doesn't handle the constraint of
respecting the order provided by the rebar.config
files. Onward
with the digression.
We have a set of "packages" where some packages depend on others and we want to determine an install order such that a package's dependencies are always installed before the package. The set of packages and the relation "depends on" form a directed acyclic graph or DAG. The topological sort of a DAG produces an install order for such a graph. The ordering is not unique. For example, with a single package C depending on A and B, valid install orders are [A, B, C] and [B, A, C].
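The same topological-sort idea can be sketched in Python for comparison, using Kahn's algorithm rather than Erlang's digraph (this is not the plugin's actual code; the queue is kept sorted only to make the output deterministic):

```python
from collections import deque

def install_order(deps):
    # deps: {package: [packages it depends on]}
    # For each dependency d of pkg we add an edge d -> pkg,
    # so dependencies always come out of the sort first.
    indeg = {}
    out = {}
    for pkg, ds in deps.items():
        indeg.setdefault(pkg, 0)
        for d in ds:
            indeg.setdefault(d, 0)
            indeg[pkg] += 1
            out.setdefault(d, []).append(pkg)
    queue = deque(sorted(p for p, n in indeg.items() if n == 0))
    order = []
    while queue:
        p = queue.popleft()
        order.append(p)
        for nxt in out.get(p, []):
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(indeg):
        raise ValueError("dependency cycle detected")
    return order
```

With C depending on A and B, this produces one of the valid orders described above, with both A and B ahead of C.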
To set up the problem, we load all of the project dependency
information into a proplist mapping each package to a list of its
dependencies extracted from the package's rebar.config
file.
read_all_deps(Config, Dir) ->
TopDeps = rebar_config:get(Config, deps, []),
Acc = [{top, dep_names(TopDeps)}],
DepDirs = filelib:wildcard(filename:join(Dir, "*")),
Acc ++ [
{filename:basename(D), dep_names(extract_deps(D))}
|| D <- DepDirs ].
Erlang's standard library provides the digraph and
digraph_utils modules for constructing and operating on directed
graphs. The digraph_utils
module includes a topsort/1
function
which we can make use of for our "exercise". The docs say:
Returns a topological ordering of the vertices of the digraph Digraph if such an ordering exists, false otherwise. For each vertex in the returned list, there are no out-neighbours that occur earlier in the list.
To figure out which way to point the edges when building our graph,
consider two packages A and B with A depending on B. We know we want
to end up with an install order of [B, A]. Rereading the topsort/1
docs, we must want an edge B => A
. With that, we can build our DAG
and obtain an install order with the topological sort:
load_digraph(Config, Dir) ->
AllDeps = read_all_deps(Config, Dir),
G = digraph:new(),
Nodes = all_nodes(AllDeps),
[ digraph:add_vertex(G, N) || N <- Nodes ],
%% If A depends on B, then we add an edge A <= B
[
[ digraph:add_edge(G, Dep, Item)
|| Dep <- DepList ]
|| {Item, DepList} <- AllDeps, Item =/= top ],
digraph_utils:topsort(G).
%% extract a sorted unique list of all deps
all_nodes(AllDeps) ->
lists:usort(lists:foldl(fun({top, L}, Acc) ->
L ++ Acc;
({K, L}, Acc) ->
[K|L] ++ Acc
end, [], AllDeps)).
The digraph
module manages graphs using ETS giving it a convenient
API, though one that feels un-erlang-y in its reliance on
side-effects.
The above gives an install order, but doesn't take into account the
relative order of deps as specified in the rebar.config
files. The
files. The solution implemented in the plugin is a bit less fancy, recursing over the deps and maintaining the desired ordering. The only tricky bit is that shared deps are ignored until the end, and the entire linearized list is then de-duped in a way that preserves the order of first occurrence. Here's the code:
order_deps(AllDeps) ->
Top = proplists:get_value(top, AllDeps),
order_deps(lists:reverse(Top), AllDeps, []).
order_deps([], _AllDeps, Acc) ->
de_dup(Acc);
order_deps([Item|Rest], AllDeps, Acc) ->
ItemDeps = proplists:get_value(Item, AllDeps),
order_deps(lists:reverse(ItemDeps) ++ Rest, AllDeps, [Item | Acc]).
de_dup(AccIn) ->
WithIndex = lists:zip(AccIn, lists:seq(1, length(AccIn))),
UWithIndex = lists:usort(fun({A, _}, {B, _}) ->
A =< B
end, WithIndex),
Ans0 = lists:sort(fun({_, I1}, {_, I2}) ->
I1 =< I2
end, UWithIndex),
[ V || {V, _} <- Ans0 ].
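For comparison, the same order-preserving de-dup is nearly a one-liner in Python, since dicts remember insertion order:

```python
def de_dup(items):
    # Keep the first occurrence of each element, preserving order,
    # mirroring the Erlang de_dup/1 above.
    return list(dict.fromkeys(items))
```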
The great thing about posting to your blog is, you don't have to have a proper conclusion if you don't want to.
Have you ever run into a bug that, no matter how carefully you try to reproduce it, only happens sometimes? And then you think you've finally got it and solved it, and test a couple of times without any manifestation. How do you know that you have tested enough? Are you sure you were not "lucky" in your tests?
In this article we will see how to answer those questions and the math behind it without going into too much detail. This is a pragmatic guide.
The following program is supposed to generate two random 8-bit integers and print them on stdout:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>   /* for read() and close() */

/* Returns -1 if error, other number if ok. */
int get_random_chars(char *r1, char *r2)
{
    int f = open("/dev/urandom", O_RDONLY);
    if (f < 0)
        return -1;
    if (read(f, r1, sizeof(*r1)) < 0)
        return -1;
    if (read(f, r2, sizeof(*r2)) < 0)
        return -1;
    close(f);
    return *r1 & *r2;
}

int main(void)
{
    char r1;
    char r2;
    int ret;

    ret = get_random_chars(&r1, &r2);
    if (ret < 0)
        fprintf(stderr, "error");
    else
        printf("%d %d\n", r1, r2);
    return ret < 0;
}
On my architecture (Linux on IA-32) it has a bug that makes it print "error" instead of the numbers sometimes.
Every time we run the program, the bug can either show up or not. It has a non-deterministic behaviour that requires statistical analysis.
We will model a single program run as a Bernoulli trial, with success defined as "seeing the bug", as that is the event we are interested in. We have the following parameters when using this model:
As a Bernoulli trial, the number of errors \(k\) of running the program \(n\) times follows a binomial distribution \(k \sim B(n,p)\). We will use this model to estimate \(p\) and to confirm the hypotheses that the bug no longer exists, after fixing the bug in whichever way we can.
By using this model we are implicitly assuming that all our tests are performed independently and identically. In other words: if the bug happens more often in one environment, we either test always in that environment or never; if the bug gets more and more frequent the longer the computer is running, we reset the computer after each trial. If we don't do that, we are effectively estimating the value of \(p\) with trials from different experiments, while in truth each experiment has its own \(p\). We will find a single value anyway, but it has no meaning and can lead us to wrong conclusions.
Another way of thinking about the model and the strategy is by creating a physical analogy with a box that has an unknown number of green and red balls:
Some things become clearer when we think about this analogy:
Before we try fixing anything, we have to know more about the bug, starting by the probability \(p\) of reproducing it. We can estimate this probability by dividing the number of times we see the bug \(k\) by the number of times we tested for it \(n\). Let's try that with our sample bug:
$ ./hasbug
67 -68
$ ./hasbug
79 -101
$ ./hasbug
error
We know from the source code that \(p=25%\), but let's pretend that we don't, as will be the case with practically every non-deterministic bug. We tested 3 times, so \(k=1, n=3 \Rightarrow p \sim 33%\), right? It would be better if we tested more, but how much more, and exactly what would be better?
Let's go back to our box analogy: imagine that there are 4 balls in the box, one red and three green. That means that \(p = 1/4\). What are the possible results when we test three times?
Red balls | Green balls | \(p\) estimate |
---|---|---|
0 | 3 | 0% |
1 | 2 | 33% |
2 | 1 | 66% |
3 | 0 | 100% |
The less we test, the lower our precision. Roughly, the precision of our estimate of \(p\) will be at most \(1/n\) - in this case, 33%. That's both the step between the values we can find for \(p\) and the minimal nonzero value for it.
Testing more improves the precision of our estimate.
Let's now approach the problem from another angle: if \(p = 1/4\), what are the odds of seeing one error in four tests? Let's name the 4 balls as 0-red, 1-green, 2-green and 3-green:
Enumerating all possible results of getting 4 balls out of the box gives \(4^4=256\) rows, generated by this python script. The same script counts the number of red balls in each row, and outputs the following table:
k | rows | % |
---|---|---|
0 | 81 | 31.64% |
1 | 108 | 42.19% |
2 | 54 | 21.09% |
3 | 12 | 4.69% |
4 | 1 | 0.39% |
That means that, for \(p=1/4\), we see 1 red ball and 3 green balls only 42% of the time when getting out 4 balls.
What if \(p = 1/3\) - one red ball and two green balls? We would get the following table:
k | rows | % |
---|---|---|
0 | 16 | 19.75% |
1 | 32 | 39.51% |
2 | 24 | 29.63% |
3 | 8 | 9.88% |
4 | 1 | 1.23% |
What about \(p = 1/2\)?
k | rows | % |
---|---|---|
0 | 1 | 6.25% |
1 | 4 | 25.00% |
2 | 6 | 37.50% |
3 | 4 | 25.00% |
4 | 1 | 6.25% |
So, let's assume that you've seen the bug once in 4 trials. What is the value of \(p\)? You know that can happen 42% of the time if \(p=1/4\), but you also know it can happen 39% of the time if \(p=1/3\), and 25% of the time if \(p=1/2\). Which one is it?
The graph below shows the discrete likelihood for all percentual values of \(p\) for getting 1 red and 3 green balls:
The fact is that, given the data, the estimate for \(p\) follows a beta distribution \(Beta(k+1, n-k+1) = Beta(2, 4)\) (1) The graph below shows the probability distribution density of \(p\):
The R script used to generate the first plot is here, the one used for the second plot is here.
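The Beta density itself is easy to evaluate without any plotting libraries: for integer parameters the normalizing constant reduces to a product of integers. A Python sketch (beta_pdf is my helper name):

```python
from math import comb

def beta_pdf(p, a, b):
    # Density of Beta(a, b) for integer a, b >= 1.
    # 1/B(a, b) = (a + b - 1) * C(a + b - 2, a - 1)
    norm = (a + b - 1) * comb(a + b - 2, a - 1)
    return norm * p ** (a - 1) * (1 - p) ** (b - 1)

# Posterior for p after k = 1 red in n = 4 draws: Beta(2, 4), i.e.
# 20 * p * (1 - p)**3, whose mode (a-1)/(a+b-2) = 1/4 is the true p.
```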
What happens when we test more? We obviously increase our precision, as it is at most \(1/n\), as we said before - there is no way to estimate that \(p=1/3\) when we only test twice. But there is also another effect: the distribution for \(p\) gets taller and narrower around the observed ratio \(k/n\):
So, which value will we use for \(p\)?
By using this framework we have direct, visual and tangible incentives to test more. We can objectively measure the potential contribution of each test.
In order to calculate \(p_{min}\) with the mentioned properties, we have to solve the following equation:
\[\sum_{i=k}^{n}{n\choose{i}}p_{min}^i(1-p_{min})^{n-i}=\frac{\alpha}{2} \]
\(\alpha\) here is twice the error we want to tolerate: 5% for an error of 2.5%.
That's not a trivial equation to solve for \(p_{min}\). Fortunately, that's the formula for the confidence interval of the binomial distribution, and there are a lot of sites that can calculate it:
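If you would rather compute it yourself, the lower confidence bound can be found by bisection on the binomial tail, since the tail probability is monotone in \(p\). A Python sketch using only the standard library (the function names are mine):

```python
from math import comb

def binom_tail(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

def p_min(k, n, alpha=0.05):
    # Lower confidence bound for p after seeing the bug k times in
    # n runs: the smallest p whose chance of producing >= k sightings
    # is alpha/2. binom_tail is increasing in p, so bisect.
    if k == 0:
        return 0.0
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binom_tail(k, n, mid) < alpha / 2:
            lo = mid
        else:
            hi = mid
    return lo
```

For example, seeing the bug 5 times in 20 runs gives a 95% lower bound of roughly 9%.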
So, you have tested a lot and calculated \(p_{min}\). The next step is fixing the bug.
After fixing the bug, you will want to test again, in order to confirm that the bug is fixed. How much testing is enough testing?
Let's say that \(t\) is the number of times we test the bug after it is fixed. Then, if our fix is not effective and the bug still presents itself with a probability greater than the \(p_{min}\) that we calculated, the probability of not seeing the bug after \(t\) tests is:
\[\alpha = (1-p_{min})^t \]
Here, \(\alpha\) is also the probability of making a type I error, while \(1 - \alpha\) is the statistical significance of our tests.
We now have two options:
Both options are valid. The first one is not always feasible, as the cost of each trial can be high in time and/or other kind of resources.
The standard statistical significance in the industry is 5%, we recommend either that or less.
Formally, this is very similar to a statistical hypothesis testing.
This file has the results found after running our program 5000 times. We must never throw out data, but let's pretend that we have tested our program only 20 times. The observed \(k/n\) ratio and the calculated \(p_{min}\) evolved as shown in the following graph:
After those 20 tests, our \(p_{min}\) is about 12%.
Suppose that we fix the bug and test it again. The following graph shows the statistical significance corresponding to the number of tests we do:
In words: we have to test 24 times after fixing the bug to reach 95% statistical significance, and 35 to reach 99%.
Now, what happens if we test more before fixing the bug?
Let's now use all the results and assume that we tested 5000 times before fixing the bug. The graph below shows \(k/n\) and \(p_{min}\):
After those 5000 tests, our \(p_{min}\) is about 23% - much closer to the real \(p\).
The following graph shows the statistical significance corresponding to the number of tests we do after fixing the bug:
We can see in that graph that after about 11 tests we reach 95%, and after about 16 we reach 99%. As we tested more before fixing the bug, we found a higher \(p_{min}\), and that allowed us to run fewer tests after fixing it.
We have seen that \(t\) decreases as we increase \(n\), since more testing can potentially increase our lower estimate for \(p\). Of course, that estimate can also decrease as we test, but that means we "got lucky" in the first trials and we are getting to know the bug better - the estimate is approaching the real value in a non-deterministic way, after all.
But, how much should we test before fixing the bug? Which value is an ideal value for \(n\)?
To define an optimal value for \(n\), we will minimize the sum \(n+t\). This objective gives us the benefit of minimizing the total amount of testing without compromising our guarantees. Minimizing the testing can be fundamental if each test costs significant time and/or resources.
The graph below shows the evolution of the values of \(t\) and \(t+n\) using the data we generated for our bug:
We can see clearly that there are some low values of \(n\) and \(t\) that give us the guarantees we need. Those values are \(n = 15\) and \(t = 24\), which gives us \(t+n = 39\).
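The search for the optimal \(n\) can be sketched as follows. This is a hypothetical simulation, not the post's actual data: the true \(p\), the seed, and the use of binom.test's lower 95% confidence bound as \(p_{min}\) are all assumptions.

## Hypothetical sketch: minimize n + t over the first 200 tests of a
## simulated bug with true failure probability p = 0.25
set.seed(42)
p <- 0.25
k <- cumsum(rbinom(200, size=1, prob=p))  # failures seen after n tests
n <- seq_along(k)
## lower 95% confidence bound on p after n tests (one possible p_min)
pmin <- mapply(function(k, n) binom.test(k, n)$conf.int[1], k, n)
## post-fix tests needed for 95% significance; infinite while pmin is 0
t <- ifelse(pmin > 0, ceiling(log(0.05) / log(1 - pmin)), Inf)
total <- n + t
best <- which.min(total)
c(n=best, t=t[best], total=total[best])
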
While you can use this technique to minimize the total number of tests performed (even more so when testing is expensive), testing more is always a good thing: it always improves our guarantees, be it in \(n\) by providing a better estimate of \(p\), or in \(t\) by increasing the statistical significance of the conclusion that the bug is fixed. So, before fixing the bug, test until you see the bug at least once, and then at least as many times as this technique specifies. Test more if you can; there is no upper bound, especially after fixing the bug. You can then report a higher confidence in the solution.
When a programmer finds a bug that behaves in a non-deterministic way, they know they should test enough to learn more about the bug, and then even more after fixing it. In this article we have presented a framework that provides criteria to define numerically how much testing is "enough" and "even more." The same technique also provides a method to objectively measure the guarantee that the amount of testing performed provides, when it is not possible to test "enough."
We have also provided a real example (even though the bug itself is artificial) where the framework is applied.
As usual, the source code of this page (R scripts, etc.) can be found and downloaded at https://github.com/lpenz/lpenz.org.
## File
file <- "myfile.txt"
## Create connection
con <- file(description=file, open="r")
## Hopefully you know the number of lines from some other source;
## otherwise, count them with an external tool such as wc
com <- paste("wc -l ", file, " | awk '{ print $1 }'", sep="")
n <- as.integer(system(command=com, intern=TRUE))
## Loop over the file connection, one line at a time
for(i in 1:n) {
  tmp <- scan(file=con, nlines=1, quiet=TRUE)
  ## do something with a line of data
}
close(con)
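If the line count is not known, a portable alternative is to loop until scan() returns nothing, which avoids the external wc call entirely. This is a self-contained sketch: the file and its numeric contents are made up for illustration.

## Read a file line by line without knowing the line count in advance
file <- tempfile()
writeLines(c("1 2", "3 4", "5 6"), file)
con <- file(description=file, open="r")
lines_read <- 0
repeat {
  tmp <- scan(file=con, nlines=1, quiet=TRUE)
  if (length(tmp) == 0) break  # end of file reached
  ## do something with a line of data
  lines_read <- lines_read + 1
}
close(con)
lines_read  # 3
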
by Gregor Gorjanc (noreply@blogger.com) at December 01, 2013 10:55 PM
After some time of using shiny I got to the point where I needed to send some arbitrary data from the client to the server, process it with R and return some other data to the client. As a client/server programming newbie this was a challenge for me as I did not want to dive too deep into the world of web programming. I wanted to get the job done using shiny and preferably as little JS/PHP etc. scripting as possible.
It turns out that the task is quite simple as shiny comes with some currently undocumented functions under the hood that will make this task quite easy. You can find some more information on these functions here.
As mentioned above, I am a web programming newbie. So this post may be helpful for people with little web programming experience (just a few lines of JavaScript are needed) and who want to see a simple way of how to get the job done.
Sending the data from the client to the server is accomplished by the JS function Shiny.onInputChange. This function takes a JS object and sends it to the shiny server. On the server side the object will be accessible as an R object under the name which is given as the second argument to the Shiny.onInputChange function. Let’s start by sending a random number to the server. The name of the object on the server side will be mydata.
Let’s create the shiny user interface file (ui.R). I will add a colored div, another element for verbatim text output called results, and the JavaScript code to send the data. The workhorse line is Shiny.onInputChange("mydata", number);. The JS code is included by passing it as a string to the tags$script function.
# ui.R
shinyUI(bootstrapPage(

  # a div named mydiv
  tags$div(id="mydiv",
           style="width: 50px; height: 50px; left: 100px; top: 100px;
                  background-color: gray; position: absolute"),

  # a shiny element to display unformatted text
  verbatimTextOutput("results"),

  # javascript code to send data to shiny server
  tags$script('
    document.getElementById("mydiv").onclick = function() {
      var number = Math.random();
      Shiny.onInputChange("mydata", number);
    };
  ')
))
Now, on the server side, we can simply access the data that was sent by addressing it the usual way via the input object (i.e. input$mydata). The code below will make the verbatimTextOutput element results show the value that was passed to the server.
# server.R
shinyServer(function(input, output, session) {
  output$results = renderPrint({
    input$mydata
  })
})
You can copy the above files from here or run the code directly. When you run the code you will find that the random value in the upper box is updated if you click on the div.
library(shiny)
runGist("https://gist.github.com/markheckmann/7554422")
What we have achieved so far is to pass some data to the server, access it and pass it back to a display on the client side. For the last part however, we have used a standard shiny element to send back the data to the client.
Now let’s add a component to send custom data from the server back to the client. This task has two parts. On the client side we need to define a handler function. This is a function that will receive the data from the server and perform some task with it. In other words, the function will handle the received data. To register a handler the function Shiny.addCustomMessageHandler is used. I will name our handler function myCallbackHandler. Our handler function will use the received data and execute some JS code. In our case it will change the color of our div called mydiv according to the color value that is passed from the server to the handler. Let’s add the JS code below to the ui.R file.
# ui.R
# handler to receive data from server
tags$script('
  Shiny.addCustomMessageHandler("myCallbackHandler",
    function(color) {
      document.getElementById("mydiv").style.backgroundColor = color;
    });
')
Let’s move to the server side. I want the server to send data to the handler function whenever the div is clicked, i.e. whenever the value of input$mydata changes. Sending the data to the client is accomplished by an R function called sendCustomMessage, which can be found in the session object. The function is passed the name of the client-side handler function and the R object we want to pass to it. Here, I create a random hex color string that gets sent to the client handler function myCallbackHandler. The line sending the data is contained in an observer. The observer includes the reactive object input$mydata, so the server will send something to the client-side handler function whenever the value of input$mydata changes. And it changes each time we click on the div. Let’s add the code below to the server.R file.
# server.R
# observes if the value of mydata sent from the client changes. if yes,
# generate a new random color string and send it back to the client
# handler function called 'myCallbackHandler'
observe({
  input$mydata
  color = rgb(runif(1), runif(1), runif(1))
  session$sendCustomMessage(type = "myCallbackHandler", color)
})
You can copy the above files from here or run the code directly. When you run the code you will see that the div changes color when you click on it.
runGist("https://gist.github.com/markheckmann/7554458")
That’s it. We have passed custom data from the client to the server and back. The following graphics sums up the functions that were used.
The two functions also do a good job passing more complex JS or R objects. If you modify your code to send a JS object to shiny, it will be converted into an R list object on the server side. Let’s replace the JS object we send to the server (in ui.R) with following lines. On the server side, we will get a list.
document.getElementById("mydiv").onclick = function() {
  var obj = {one: [1,2,3,4], two: ["a", "b", "c"]};
  Shiny.onInputChange("mydata", obj);
};
Note, however, that now the shiny server will only execute the function once (on loading), not each time the click event is fired. The reason is that the input data is now static: the JS object we send via onInputChange does not change. To reduce the workload on the server side, the code in the observer is only executed when the reactive value under observation (i.e. the value of input$mydata) changes. As this is no longer the case, since the value we pass is static, the observer that sends the color information back to the client to change the color of the div is not executed a second time.
The conversion also works nicely the other way round. We can pass an R list object to the sendCustomMessage function and on the client side it will appear as a JS object. So we are free to pass almost any type of data we need to.
To keep things simple I included the JS code directly in the ui.R file using tags$script. This does not look very nice and you may want to put the JS code in a separate file instead. For this purpose I will create a JS file called mycode.js and move all the above JS code into it. Additionally, this file has another modification: all the code is wrapped in some JS/jQuery code ($(document).ready(function() { ... })) that makes sure the JS code is run after the DOM (that is, all the HTML elements) is loaded. Before, I simply placed the JS code below the HTML elements to make sure they are loaded, but I guess this is not good practice.
// mycode.js
$(document).ready(function() {

  document.getElementById("mydiv").onclick = function() {
    var number = Math.random();
    Shiny.onInputChange("mydata", number);
  };

  Shiny.addCustomMessageHandler("myCallbackHandler",
    function(color) {
      document.getElementById("mydiv").style.backgroundColor = color;
    });

});
To include the JS file, shiny offers the includeScript function. The server.R file has not changed; the ui.R file now looks like this.
# ui.R
library(shiny)
shinyUI(bootstrapPage(

  # include the js code
  includeScript("mycode.js"),

  # a div named mydiv
  tags$div(id="mydiv",
           style="width: 50px; height: 50px; left: 100px; top: 100px;
                  background-color: gray; position: absolute"),

  # an element for unformatted text
  verbatimTextOutput("results")
))
You can copy the above files from here or run the gist directly from within R.
runGist("https://gist.github.com/markheckmann/7563267")
The above examples are purely artificial, as it does not make much sense to let the server generate a random color value and send it back to the client. JS could do all of this on the client side without any need for client/server communication at all. The examples are just for demonstration purposes, to outline the mechanisms you may use for sending custom data to the server or client using the functions supplied by the marvellous shiny package. Winston Chang (one of the RStudio and shiny guys) has some more examples in his testapp repo. Have a look at the message-handler-inline and the message-handler-jsfile folders.
Enjoy!
ped <- data.frame( id=c( 1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
fid=c( NA, NA, 2, 2, 4, 2, 5, 5, NA, 8),
mid=c( NA, NA, 1, NA, 3, 3, 6, 6, NA, 9))
## install.packages(pkgs="pedigreemm")
library(package="pedigreemm")
ped2 <- with(ped, pedigree(sire=fid, dam=mid, label=id))
U <- relfactor(ped2)
A <- crossprod(U)
round(U, digits=2)
## 10 x 10 sparse Matrix of class "dtCMatrix"
## [1,] 1 . 0.50 . 0.25 0.25 0.25 0.25 . 0.12
## [2,] . 1 0.50 0.50 0.50 0.75 0.62 0.62 . 0.31
## [3,] . . 0.71 . 0.35 0.35 0.35 0.35 . 0.18
## [4,] . . . 0.87 0.43 . 0.22 0.22 . 0.11
## [5,] . . . . 0.71 . 0.35 0.35 . 0.18
## [6,] . . . . . 0.71 0.35 0.35 . 0.18
## [7,] . . . . . . 0.64 . . .
## [8,] . . . . . . . 0.64 . 0.32
## [9,] . . . . . . . . 1 0.50
## [10,] . . . . . . . . . 0.66
## To check
U - chol(A)
round(A, digits=2)
## 10 x 10 sparse Matrix of class "dsCMatrix"
## [1,] 1.00 . 0.50 . 0.25 0.25 0.25 0.25 . 0.12
## [2,] . 1.00 0.50 0.50 0.50 0.75 0.62 0.62 . 0.31
## [3,] 0.50 0.50 1.00 0.25 0.62 0.75 0.69 0.69 . 0.34
## [4,] . 0.50 0.25 1.00 0.62 0.38 0.50 0.50 . 0.25
## [5,] 0.25 0.50 0.62 0.62 1.12 0.56 0.84 0.84 . 0.42
## [6,] 0.25 0.75 0.75 0.38 0.56 1.25 0.91 0.91 . 0.45
## [7,] 0.25 0.62 0.69 0.50 0.84 0.91 1.28 0.88 . 0.44
## [8,] 0.25 0.62 0.69 0.50 0.84 0.91 0.88 1.28 . 0.64
## [9,] . . . . . . . . 1.0 0.50
## [10,] 0.12 0.31 0.34 0.25 0.42 0.45 0.44 0.64 0.5 1.00
## install.packages(pkgs="bdsmatrix")
library(package="bdsmatrix")
tmp <- gchol(as.matrix(A))
D <- diag(tmp)
(T <- as(as.matrix(tmp), "dtCMatrix"))
## 10 x 10 sparse Matrix of class "dtCMatrix"
## [1,] 1.000 . . . . . . . . .
## [2,] . 1.0000 . . . . . . . .
## [3,] 0.500 0.5000 1.00 . . . . . . .
## [4,] . 0.5000 . 1.000 . . . . . .
## [5,] 0.250 0.5000 0.50 0.500 1.00 . . . . .
## [6,] 0.250 0.7500 0.50 . . 1.00 . . . .
## [7,] 0.250 0.6250 0.50 0.250 0.50 0.50 1 . . .
## [8,] 0.250 0.6250 0.50 0.250 0.50 0.50 . 1.0 . .
## [9,] . . . . . . . . 1.0 .
## [10,] 0.125 0.3125 0.25 0.125 0.25 0.25 . 0.5 0.5 1
## To check
L <- T %*% diag(sqrt(D))
L - t(U)
(TInv <- as(ped2, "sparseMatrix"))
## 10 x 10 sparse Matrix of class "dtCMatrix" (unitriangular)
## 1 1.0 . . . . . . . . .
## 2 . 1.0 . . . . . . . .
## 3 -0.5 -0.5 1.0 . . . . . . .
## 4 . -0.5 . 1.0 . . . . . .
## 5 . . -0.5 -0.5 1.0 . . . . .
## 6 . -0.5 -0.5 . . 1.0 . . . .
## 7 . . . . -0.5 -0.5 1 . . .
## 8 . . . . -0.5 -0.5 . 1.0 . .
## 9 . . . . . . . . 1.0 .
## 10 . . . . . . . -0.5 -0.5 1
round(DInv <- Diagonal(x=1/Dmat(ped2)), digits=2)
## 10 x 10 diagonal matrix of class "ddiMatrix"
## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## [1,] 1 . . . . . . . . .
## [2,] . 1 . . . . . . . .
## [3,] . . 2 . . . . . . .
## [4,] . . . 1.33 . . . . . .
## [5,] . . . . 2 . . . . .
## [6,] . . . . . 2 . . . .
## [7,] . . . . . . 2.46 . . .
## [8,] . . . . . . . 2.46 . .
## [9,] . . . . . . . . 1 .
## [10,] . . . . . . . . . 2.33
round(t(TInv) %*% DInv %*% TInv, digits=2)
## 10 x 10 sparse Matrix of class "dgCMatrix"
## ...
round(crossprod(sqrt(DInv) %*% TInv), digits=2)
## 10 x 10 sparse Matrix of class "dsCMatrix"
## [1,] 1.5 0.50 -1.0 . . . . . . .
## [2,] 0.5 2.33 -0.5 -0.67 . -1.00 . . . .
## [3,] -1.0 -0.50 3.0 0.50 -1.00 -1.00 . . . .
## [4,] . -0.67 0.5 1.83 -1.00 . . . . .
## [5,] . . -1.0 -1.00 3.23 1.23 -1.23 -1.23 . .
## [6,] . -1.00 -1.0 . 1.23 3.23 -1.23 -1.23 . .
## [7,] . . . . -1.23 -1.23 2.46 . . .
## [8,] . . . . -1.23 -1.23 . 3.04 0.58 -1.16
## [9,] . . . . . . . 0.58 1.58 -1.16
## [10,] . . . . . . . -1.16 -1.16 2.33
## To check
solve(A) - crossprod(sqrt(DInv) %*% TInv)
by Gregor Gorjanc (noreply@blogger.com) at August 13, 2013 02:28 PM
## Collect arguments
args <- commandArgs(TRUE)
## Default setting when no arguments passed
if(length(args) < 1) {
args <- c("--help")
}
## Help section
if("--help" %in% args) {
cat("
The R Script
Arguments:
--arg1=someValue - numeric, blah blah
--arg2=someValue - character, blah blah
--arg3=someValue - logical, blah blah
--help - print this text
Example:
./test.R --arg1=1 --arg2="output.txt" --arg3=TRUE \n\n")
q(save="no")
}
## Parse arguments (we expect the form --arg=value)
parseArgs <- function(x) strsplit(sub("^--", "", x), "=")
argsDF <- as.data.frame(do.call("rbind", parseArgs(args)))
argsL <- as.list(as.character(argsDF$V2))
names(argsL) <- argsDF$V1
## Arg1 default
if(is.null(argsL$arg1)) {
## do something
}
## Arg2 default
if(is.null(argsL$arg2)) {
## do something
}
## Arg3 default
if(is.null(argsL$arg3)) {
## do something
}
## ... your code here ...
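To see what the parsing step produces, here is the same logic run in isolation on made-up arguments (the values are illustrative only):

## The parsing logic above, applied to a hand-written argument vector
parseArgs <- function(x) strsplit(sub("^--", "", x), "=")
args <- c("--arg1=1", "--arg2=output.txt", "--arg3=TRUE")
argsDF <- as.data.frame(do.call("rbind", parseArgs(args)))
argsL <- as.list(as.character(argsDF$V2))
names(argsL) <- argsDF$V1
str(argsL)
## all values arrive as character strings, so coerce as needed
arg1 <- as.numeric(argsL$arg1)
arg3 <- as.logical(argsL$arg3)
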
by Gregor Gorjanc (noreply@blogger.com) at July 02, 2013 04:55 PM
This blog is moving to blog.r-enthusiasts.com. The new one is powered by wordpress and gets a subdomain of r-enthusiasts.com.
See you there.
by Zachary Deane-Mayer (noreply@blogger.com) at March 17, 2013 04:04 AM
by Zachary Deane-Mayer (noreply@blogger.com) at March 13, 2013 02:36 PM
I'm trying to make improvements to the R Graph Gallery, I'm looking for suggestions from users of the website.
I've started a question on the website's facebook page. Please take a few seconds to vote on existing improvement suggestions and perhaps offer some ideas of your own.
The version 0.3-5 of the bibtex package is on CRAN. This fixes a corner case issue about empty bib files thanks to Kurt Hornik.
The purpose of Rcpp modules has always been to make it easy to expose C++ functions and classes to R. Up to now, Rcpp modules did not have a way to declare inheritance between C++ classes. This is now fixed in the development version, and the next version of Rcpp will have a simple mechanism to declare inheritance.
Consider this simple example: we have a base class Shape with two virtual methods (area and contains) and two classes (Circle and Rectangle), each deriving from Shape and representing a specific shape.
The classes might look like this:
And we can expose these classes to R using the following module declarative code:
It is worth noticing that:
R code that uses these classes looks like this:
I recently wanted to construct a dashboard widget that contains some text and other elements using the grid graphics system. The size available for the widget will vary. When the sizes of the grobs in the widget are specified in Normalised Parent Coordinates, the size adjustments happen automatically. Text, however, does not adjust automatically: the size of the text, which is calculated as fontsize times the character expansion factor (cex), remains the same when the viewport size changes. For my widget this would require adjusting the fontsize or cex settings for each case separately. While this is not really an obstacle, I asked myself how a grob that automatically adjusts its text size when being resized can be constructed. Here I jot down my results in the hope that you may find them useful.
First I will create a new grob class called resizingTextGrob that is supposed to resize automatically.
library(grid)
library(scales)

resizingTextGrob <- function(...)
{
  grob(tg=textGrob(...), cl="resizingTextGrob")
}
The grob created by the function contains nothing more than a textGrob. In order for the grob class to print something we need to specify the drawDetails method for our class, which will do the drawing. The drawDetails method is called automatically when drawing a grob using grid.draw.
drawDetails.resizingTextGrob <- function(x, recording=TRUE)
{
  grid.draw(x$tg)
}
Up to now this will produce the same results as a plain textGrob.
g <- resizingTextGrob(label="test 1")
grid.draw(g)
grid.text("test 2", y=.4)
Now, before doing the drawing we want to calculate the size of the viewport and adjust the fontsize accordingly. To do this we can push a new viewport with an adjusted fontsize before the drawing occurs. To perform the calculations and push the viewport we specify a preDrawDetails method. This method is automatically called before any drawing occurs. It gives us the chance to make some modifications, like e.g. pushing a viewport.
For this purpose, first the available height is calculated. Then the fontsize is rescaled according to that height. The rescaled fontsize is used for the new viewport. For a fully developed class we would want to include these parameters in the grob constructor, of course, or define a proportion factor argument by which to shrink the text instead. Anyway, to keep things simple this is not done here.
preDrawDetails.resizingTextGrob <- function(x)
{
  h <- convertHeight(unit(1, "snpc"), "mm", valueOnly=TRUE)
  fs <- rescale(h, to=c(18, 7), from=c(120, 20))
  pushViewport(viewport(gp = gpar(fontsize = fs)))
}
To clean up after the drawing, the created viewport is popped. This is done in the postDrawDetails method, which is automatically called after the drawDetails method.
postDrawDetails.resizingTextGrob <- function(x) popViewport()
Now the output will depend on the size of the current viewport. When resizing the device the text size will adjust.
g <- resizingTextGrob(label="test 1")
grid.draw(g)
grid.text("test 2", y=.4)
Let’s compare the standard textGrob with the new class. For this purpose let’s draw a small clock and display it using different device sizes.
library(gridExtra)

a <- seq(2*pi, 2*pi/12, length=12) + pi/3
x <- cos(a) / 2
y <- sin(a) / 2
segs <- segmentsGrob(x*.2 + .5, y*.2 + .5, x*.3 + .5, y*.3 + .5)

# the standard approach
tgs.1 <- textGrob(1:12, x*.4 + .5, y*.4 + .5)

# the new grob class
tgs.2 <- resizingTextGrob(1:12, x*.4 + .5, y*.4 + .5)

grid.arrange(grobTree(segs, tgs.1), grobTree(segs, tgs.2))
What it looks like at the beginning.
What it looks like when the device is resized.
Note how the text size of the lower clock adjusts to the device size, while the text in the upper clock stays the same and becomes too big for it.
BTW: The definitive guide for the grid graphics model is the book R graphics by Paul Murrell.