Galaxy clusters across cosmic time


I attended this conference in Aix-en-Provence this week. I gave a talk on my hierarchical analysis of galaxy clusters, and there was a great turnout from the cluster community, including lots of cluster environmentalists! Anyway, some of the talks I found most interesting were:

Michael Gregg, from UC Davis, gave a really nice talk on some gigantic dust features observed in galaxies falling into the Coma cluster, which he believes were created by the interaction between the intracluster medium and the interstellar medium. These structures appear similar to more local structures like the Eagle Nebula, but are much, much larger and more impressive!

Chris Haines, from the University of Arizona (and previously a post-doc at Birmingham), spoke about the newly detected groups infalling onto the LoCuSS galaxy clusters. These objects have multi-wavelength follow-up. The X-ray-luminosity-derived masses suggest that the number density of these objects is much higher than expected from the observationally derived cluster mass function, suggesting that groups could be biased tracers of the mass distribution.

Fabio Gastaldello, who is based at INAF Milano, talked about the ongoing problem we have with selection functions, and in particular the difference in properties between the Planck (Sunyaev-Zel'dovich) detected cluster sample and clusters detected in X-rays: SZ-selected clusters tend to be deficient in relaxed, cool-core objects.

Michael West, from the Lowell Observatory, gave a lovely talk with many pretty pictures showing the results of his study on the alignment of BCGs (brightest cluster galaxies) within superclusters. It seems that the BCGs are aware of the supercluster/filamentary environment, whereas other cluster members are oblivious. He also suggested that AGN within the BCGs could potentially be aligned with the filamentary structure too!

Nicolas Martinet, from the University of Bonn, presented cosmological results from the KiDS-450 weak-lensing peak-count analysis, which are consistent with the cosmic shear analysis but slightly less in tension with the Planck results. He believes that this method should ultimately be a stronger probe of S8 than cosmic shear analysis, despite currently having larger uncertainties.

On Relative entropy


Relative entropy is a really cool tool; if you haven't heard of it before, you are in for a treat! Also known as the Kullback-Leibler divergence, it is a measure of the distance between two probability distributions [e.g. p(x) and q(x)] on a random variable X. It is defined as:

D(p||q) = \sum \limits_{x \in \chi} p(x) \log \frac{p(x)}{q(x)}

= -\sum p(x) \log q(x) + \sum p(x) \log p(x)
= H(P, Q) - H(P)

where H(P,Q) is the cross entropy of P and Q, and H(P) is the entropy of P. Note: the log is taken as base 2 if information is measured in bits, or base e if in nats.
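As a quick illustration of the definition, here is a minimal sketch in R (the helper name kl_div is my own):

kl_div <- function(p, q, base = exp(1)) {
  # D(p||q) = sum_x p(x) log(p(x)/q(x)); terms with p(x) = 0 contribute zero,
  # and base = 2 gives bits while the default natural log gives nats
  stopifnot(isTRUE(all.equal(sum(p), 1)), isTRUE(all.equal(sum(q), 1)))
  nz <- p > 0
  sum(p[nz] * log(p[nz] / q[nz], base = base))
}

p <- c(0.5, 0.25, 0.25)
q <- rep(1/3, 3)
kl_div(p, q, base = 2)  # in bits
kl_div(q, p, base = 2)  # a different value: the divergence is not symmetric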

It is called entropy because it is related to how p(x) diverges from the uniform distribution on the support of X. The more divergent it is, the larger the relative entropy.

Just like entropy in the physical sciences, it is always non-negative, and zero relative entropy means that the two distributions are a perfect match.

Note that relative entropy is *not* symmetric! In general D(p||q) \ne D(q||p), as we will see in the next example.

For two Gaussian distributions, we can write the relative entropy in the following way:
D(P_2||P_1)=\frac{1}{2}[(\frac{\mu_1-\mu_2}{\sigma_1})^2 + (\frac{\sigma_2}{\sigma_1})^2+2\log(\frac{\sigma_1}{\sigma_2}) -1]

For d-dimensional multivariate normals:
D(p_2||p_1) = \frac{1}{2}\left[(\mu_1-\mu_2)^T\Sigma_1^{-1}(\mu_1-\mu_2)+\mathrm{tr}(\Sigma_2\Sigma_1^{-1})-d-\log \det (\Sigma_2\Sigma_1^{-1})\right]
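As a sanity check, here is a sketch of the d-dimensional formula in R (the helper name kl_mvn is my own); it reduces to the one-dimensional expression above when d = 1:

kl_mvn <- function(mu1, Sigma1, mu2, Sigma2) {
  # D(P2||P1) for two multivariate normals, in nats
  d     <- length(mu1)
  S1inv <- solve(Sigma1)                 # Sigma_1^{-1}
  M     <- Sigma2 %*% S1inv              # Sigma_2 Sigma_1^{-1}
  dm    <- mu1 - mu2
  quad  <- drop(t(dm) %*% S1inv %*% dm)  # (mu_1-mu_2)^T Sigma_1^{-1} (mu_1-mu_2)
  0.5 * (quad + sum(diag(M)) - d - as.numeric(determinant(M)$modulus))
}

# d = 1 check: mu_1 = 0, sigma_1 = 1, mu_2 = 2.3, sigma_2 = 1
kl_mvn(0, matrix(1), 2.3, matrix(1))  # ~2.645 nats, matching the 1D formula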

Example Normal distributions:

Let's make some toy data from two Gaussian distributions and calculate their relative entropy.

rel_entropy<-function(mu1,sigma1,mu2,sigma2){
 #returns D(P2||P1) in nats, following the formula above (note sigma1 in the denominators)
 out=0.5*(((mu1-mu2)/sigma1)^2.0+(sigma2/sigma1)^2.0+2.0*log(sigma1/sigma2)-1.0)
 return(out)
}
options(repr.plot.width = 10, repr.plot.height = 6)
par(mfrow=c(1,2))

#first pair of gaussians will vary in mean
m1=0
s1=1
m2=2.3
s2=1
x=seq(-3,3,0.1)
D1=rel_entropy(m1,s1,m2,s2)
plot(c(-3,3),c(0,1), type='n', xlab = 'x', ylab = 'p(x)', main=paste('D(p2||p1)=', format(D1,digits = 2),sep=''))
lines(x,dnorm(x=x,mean = m1,sd =s1 ), col='skyblue', lwd=3)
lines(x,dnorm(x=x,mean = m2,sd =s2 ), col='palegreen',lwd=3)
legend('topleft',legend=c(expression(p[1]*' : '*mu[1]*'=0, '*sigma[1]*'=1'),expression(p[2]*' : '*mu[2]*'=2.3, '*sigma[2]*'=1')), col=c('skyblue', 'palegreen'), lty='solid', bty='n', lwd=3)

#second pair of gaussians will vary in standard deviation
m1=0
s1=1
m2=0
s2=0.5
x=seq(-3,3,0.1)
D1=rel_entropy(m1,s1,m2,s2)
plot(c(-3,3),c(0,1), type='n', xlab = 'x', ylab = 'p(x)', main=paste('D(p2||p1)=', format(D1,digits = 2),sep=''))
lines(x,dnorm(x=x,mean = m1,sd =s1 ), col='skyblue', lwd=3)
lines(x,dnorm(x=x,mean = m2,sd =s2 ), col='palegreen',lwd=3)
legend('topleft',legend=c(expression(p[1]*' : '*mu[1]*'=0, '*sigma[1]*'=1'),expression(p[2]*' : '*mu[2]*'=0, '*sigma[2]*'=0.5')), col=c('skyblue', 'palegreen'), lty='solid', bty='n', lwd=3)
[Figure: the two pairs of Gaussians, with D(p2||p1) shown in each panel title]

In the left plot the relative entropy is quite large: the mean of p2 is shifted well away from that of p1, and since relative entropy measures the gain in information, a lot of information is gained in updating from p1 to p2. The plot on the right, where the two distributions share the same mean and p2 is simply tighter, has a much lower relative entropy: the information gain is smaller.

Now reversing p1 and p2 in the relative entropy calculations results in the following plot:

[Figure: the same pairs of Gaussians with p1 and p2 swapped in the relative entropy calculation]

Notice that in the left plot the reversal has no effect on the relative entropy: for two Gaussians of equal width that differ only by a shift in mean, the divergence happens to be symmetric. On the right-hand side, however, the relative entropy has increased: the divergence of a broad distribution from a narrow reference is larger than the reverse, because the broad distribution places mass where the narrow one has essentially none.

Surprise

Now, relative entropy is all well and good, but it doesn't tell us much on its own. This is where Surprise comes in. The Surprise S is the relative entropy minus the expected relative entropy, and it is a measure of tension:

S = D(p_2||p_1) - \langle D \rangle
where
\langle D \rangle = \int p(D_2|D_1)\, D(p(\Theta|D_2)||p(\Theta|D_1))\, dD_2
= \int \left[\int p(\Theta|D_1)\,p(D_2|\Theta)\,d\Theta\right] D(p(\Theta|D_2)||p(\Theta|D_1))\, dD_2
is the expected relative entropy: the mean of D(p(\Theta|D_2)||p(\Theta|D_1)) over realisations of the second dataset D_2 drawn from the predictive distribution p(D_2|D_1).
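To build some intuition, here is a minimal Monte-Carlo sketch of \langle D \rangle for a toy Gaussian case, reusing the rel_entropy function defined earlier. The setup (a single parameter measured with known Gaussian noise, updated conjugately) is my own illustrative assumption, not the construction used in the paper below:

# toy setup: theta has posterior p1 = N(mu1, sigma1^2) from dataset D1;
# a future datum D2 = theta + noise, with noise ~ N(0, sigma_n^2)
set.seed(1)
mu1 <- 0; sigma1 <- 1; sigma_n <- 0.5
n_sim <- 1e4

theta <- rnorm(n_sim, mu1, sigma1)    # theta ~ p(theta|D1)
D2    <- rnorm(n_sim, theta, sigma_n) # D2 ~ p(D2|D1), the predictive draws

# conjugate normal-normal update gives p2 = p(theta|D1,D2) for each draw
sigma2 <- 1/sqrt(1/sigma1^2 + 1/sigma_n^2)
mu2    <- sigma2^2*(mu1/sigma1^2 + D2/sigma_n^2)

D_sim <- rel_entropy(mu1, sigma1, mu2, sigma2) # D(p2||p1) for each draw
mean(D_sim)  # estimate of <D>; for an observed update, S = D_obs - mean(D_sim)

A negative Surprise then means the observed shift between the two posteriors is smaller than the statistical scatter alone would lead you to expect.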

The paper Seehars+2016 is a good demonstration of these two tools in action. They look at the tensions between the cosmological parameters measured by three different CMB survey releases (see here for a review): WMAP9, Planck 2013 and Planck 2015.

They show that the marginalised posteriors of the cosmological parameters from Planck 2013 appear to agree better with those of Planck 2015, and to be in tension with those of WMAP9.


Seehars+2016 Fig. 2: 1D and 2D marginalised posteriors of the cosmological parameters.

However, their relative entropy and Surprise analysis actually found better agreement between WMAP9 and Planck 2015. Remember, Surprise is a measure of tension: the relative entropy was small and the Surprise negative, which means the Planck 2015 results were not at all surprising given the WMAP9 results.


Seehars+2016 Fig. 1: Relative entropy and Surprise for results from various CMB surveys.

From this they perform a PCA and find that, whilst the two Planck results agree better in the original marginalised parameter space, after re-parameterising along the eigenvectors with the largest eigenvalues it is WMAP9 and Planck 2015 that are in better agreement.


Seehars+2016 Fig. 4: 1D and 2D marginalised posteriors under the new parameterisation.

Multivariate Gaussian Mixture Model done properly


Michael Betancourt recently wrote a nice case study describing the problems often encountered with Gaussian mixture models, specifically the estimation of the mixture parameters and identifiability, i.e. the problem of labelling the mixture components (http://mc-stan.org/documentation/case-studies/identifying_mixture_models.html). There have also been suggestions that GMMs can't easily be done in Stan. I've found various examples online of simple 2D Gaussian mixtures, and one (wrong) example of a multivariate GMM. I wanted to demonstrate that Stan can in fact do multivariate GMMs, and very quickly! But, as Mike has already discussed, the identifiability problems are still inherent in the model.

For this I will use R, but of course Stan is also available in wrappers for Python, Ruby and others. First, let's load the required libraries:

library(MASS)  # for mvrnorm, to draw multivariate normal samples
library(rstan)

Then we need to generate some toy data. Working in a 4-dimensional parameter space, I want to create 3 Gaussian mixture components at different locations:

#first cluster
mu1=c(0,0,0,0)
sigma1=matrix(c(0.1,0,0,0,0,0.1,0,0,0,0,0.1,0,0,0,0,0.1),ncol=4,nrow=4, byrow=TRUE)
norm1=mvrnorm(30, mu1, sigma1)

#second cluster
mu2=c(7,7,7,7)
sigma2=sigma1
norm2=mvrnorm(30, mu2, sigma2)

#third cluster
mu3=c(3,3,3,3)
sigma3=sigma1
norm3=mvrnorm(30, mu3, sigma3)

norms=rbind(norm1,norm2,norm3) #combine the 3 mixtures together
N=90 #total number of data points
Dim=4 #number of dimensions
y=array(as.vector(norms), dim=c(N,Dim)) #coerce to a plain N x Dim array
mixture_data=list(N=N, D=Dim, K=3, y=y)
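Before fitting, it is worth eyeballing the toy data (a small optional check of my own, not part of the original recipe):

pairs(norms, pch=19, col=rep(1:3, each=30), labels=paste0('dim ', 1:4))
# three well-separated clumps should be visible in every pair of dimensions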

The model only takes a few lines of code:

mixture_model<-'
data {
 int D; //number of dimensions
 int K; //number of gaussians
 int N; //number of data
 vector[D] y[N]; //data
}

parameters {
 simplex[K] theta; //mixing proportions
 ordered[D] mu[K]; //mixture component means (declared ordered to help with identifiability)
 cholesky_factor_corr[D] L[K]; //cholesky factor of the correlation matrix for each component
}

model {
 real ps[K];
 
 for(k in 1:K){
 mu[k] ~ normal(0,3);
 L[k] ~ lkj_corr_cholesky(4);
 }
 

 for (n in 1:N){
 for (k in 1:K){
 ps[k] = log(theta[k])+multi_normal_cholesky_lpdf(y[n] | mu[k], L[k]); //log mixing weight + component log-likelihood
 }
 target += log_sum_exp(ps); //marginalise over the mixture components
 }

}

'

Running the model in R takes just one line too. Here I use 11000 iterations, 1000 of which are warmup (for adaptation of the NUTS sampler parameters). I'll use only one chain for speed:

fit=stan(model_code=mixture_model, data=mixture_data, iter=11000, warmup=1000, chains=1)
SAMPLING FOR MODEL '16de4bc17f41669412586868e09d4c65' NOW (CHAIN 1).

Chain 1, Iteration:     1 / 11000 [  0%]  (Warmup)
Chain 1, Iteration:  1001 / 11000 [  9%]  (Sampling)
Chain 1, Iteration:  2100 / 11000 [ 19%]  (Sampling)
Chain 1, Iteration:  3200 / 11000 [ 29%]  (Sampling)
Chain 1, Iteration:  4300 / 11000 [ 39%]  (Sampling)
Chain 1, Iteration:  5400 / 11000 [ 49%]  (Sampling)
Chain 1, Iteration:  6500 / 11000 [ 59%]  (Sampling)
Chain 1, Iteration:  7600 / 11000 [ 69%]  (Sampling)
Chain 1, Iteration:  8700 / 11000 [ 79%]  (Sampling)
Chain 1, Iteration:  9800 / 11000 [ 89%]  (Sampling)
Chain 1, Iteration: 10900 / 11000 [ 99%]  (Sampling)
Chain 1, Iteration: 11000 / 11000 [100%]  (Sampling)
 Elapsed Time: 13.0271 seconds (Warm-up)
               99.2967 seconds (Sampling)
               112.324 seconds (Total)

From the results we can see we get good convergence:

print(fit)
Inference for Stan model: 16de4bc17f41669412586868e09d4c65.
1 chains, each with iter=11000; warmup=1000; thin=1; 
post-warmup draws per chain=10000, total post-warmup draws=10000.

            mean se_mean   sd    2.5%     25%     50%     75%   97.5% n_eff
theta[1]    0.33    0.00 0.05    0.24    0.30    0.33    0.37    0.43 10000
theta[2]    0.33    0.00 0.05    0.24    0.30    0.33    0.37    0.43 10000
theta[3]    0.33    0.00 0.05    0.24    0.30    0.33    0.36    0.43 10000
mu[1,1]    -0.09    0.01 0.17   -0.41   -0.19   -0.10    0.01    0.28   751
mu[1,2]    -0.01    0.01 0.16   -0.33   -0.11   -0.02    0.08    0.35   952
mu[1,3]     0.13    0.00 0.16   -0.20    0.04    0.13    0.22    0.45  3100
mu[1,4]     0.19    0.00 0.16   -0.15    0.10    0.19    0.29    0.51  2086
mu[2,1]     6.85    0.01 0.12    6.57    6.78    6.87    6.93    7.06   133
mu[2,2]     6.90    0.01 0.12    6.63    6.84    6.92    6.98    7.11   129
mu[2,3]     6.95    0.01 0.11    6.69    6.89    6.96    7.01    7.15   333
mu[2,4]     7.03    0.00 0.11    6.79    6.97    7.03    7.09    7.27   579
mu[3,1]     2.78    0.00 0.13    2.50    2.71    2.80    2.87    2.99  1704
mu[3,2]     2.84    0.00 0.12    2.56    2.77    2.86    2.92    3.05  1005
mu[3,3]     2.91    0.01 0.13    2.62    2.84    2.93    2.99    3.17   179
mu[3,4]     3.12    0.01 0.14    2.85    3.04    3.11    3.21    3.42   451
L[1,1,1]    1.00    0.00 0.00    1.00    1.00    1.00    1.00    1.00 10000
L[1,1,2]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[1,1,3]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[1,1,4]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[1,2,1]    0.73    0.02 0.41   -0.78    0.80    0.85    0.89    0.93   263
L[1,2,2]    0.54    0.00 0.11    0.38    0.46    0.52    0.60    0.83  1141
L[1,2,3]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[1,2,4]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[1,3,1]    0.16    0.08 0.77   -0.89   -0.78    0.70    0.82    0.89    97
L[1,3,2]    0.04    0.02 0.24   -0.44   -0.13    0.07    0.21    0.49   205
L[1,3,3]    0.56    0.00 0.11    0.40    0.49    0.55    0.62    0.81  7240
L[1,3,4]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[1,4,1]    0.15    0.08 0.75   -0.89   -0.76    0.66    0.80    0.88    89
L[1,4,2]    0.20    0.01 0.23   -0.27    0.05    0.23    0.36    0.65   351
L[1,4,3]    0.31    0.00 0.15    0.03    0.23    0.31    0.40    0.63  2280
L[1,4,4]    0.44    0.00 0.08    0.31    0.38    0.43    0.49    0.64  1879
L[2,1,1]    1.00    0.00 0.00    1.00    1.00    1.00    1.00    1.00 10000
L[2,1,2]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[2,1,3]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[2,1,4]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[2,2,1]    0.03    0.11 0.71   -0.87   -0.75    0.36    0.73    0.85    43
L[2,2,2]    0.69    0.00 0.13    0.49    0.60    0.68    0.77    0.98  1978
L[2,2,3]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[2,2,4]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[2,3,1]    0.45    0.05 0.59   -0.78    0.36    0.76    0.83    0.90   130
L[2,3,2]    0.00    0.04 0.39   -0.70   -0.33    0.05    0.33    0.69    89
L[2,3,3]    0.53    0.00 0.11    0.37    0.46    0.52    0.59    0.79   935
L[2,3,4]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[2,4,1]    0.25    0.07 0.68   -0.80   -0.57    0.70    0.84    0.91    89
L[2,4,2]   -0.03    0.07 0.42   -0.74   -0.38   -0.07    0.33    0.72    40
L[2,4,3]   -0.15    0.02 0.22   -0.55   -0.30   -0.17    0.01    0.27   122
L[2,4,4]    0.46    0.00 0.09    0.31    0.40    0.45    0.51    0.69   883
L[3,1,1]    1.00    0.00 0.00    1.00    1.00    1.00    1.00    1.00 10000
L[3,1,2]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[3,1,3]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[3,1,4]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[3,2,1]    0.45    0.04 0.59   -0.83    0.49    0.75    0.82    0.89   178
L[3,2,2]    0.66    0.00 0.12    0.46    0.57    0.64    0.73    0.96  1936
L[3,2,3]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[3,2,4]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[3,3,1]    0.58    0.08 0.55   -0.81    0.71    0.82    0.87    0.92    48
L[3,3,2]    0.01    0.03 0.28   -0.57   -0.19    0.07    0.21    0.48   121
L[3,3,3]    0.53    0.00 0.10    0.37    0.45    0.51    0.58    0.78   580
L[3,3,4]    0.00    0.00 0.00    0.00    0.00    0.00    0.00    0.00 10000
L[3,4,1]   -0.44    0.06 0.64   -0.90   -0.84   -0.77   -0.51    0.87   104
L[3,4,2]   -0.13    0.02 0.29   -0.59   -0.33   -0.20    0.07    0.49   236
L[3,4,3]   -0.10    0.02 0.22   -0.48   -0.25   -0.12    0.05    0.35   101
L[3,4,4]    0.48    0.00 0.09    0.35    0.42    0.47    0.53    0.69  2954
lp__     -443.66    0.15 5.00 -454.49 -446.87 -443.28 -440.12 -435.03  1053
         Rhat
theta[1] 1.00
theta[2] 1.00
theta[3] 1.00
mu[1,1]  1.00
mu[1,2]  1.00
mu[1,3]  1.00
mu[1,4]  1.00
mu[2,1]  1.02
mu[2,2]  1.02
mu[2,3]  1.01
mu[2,4]  1.00
mu[3,1]  1.00
mu[3,2]  1.00
mu[3,3]  1.00
mu[3,4]  1.00
L[1,1,1]  NaN
L[1,1,2]  NaN
L[1,1,3]  NaN
L[1,1,4]  NaN
L[1,2,1] 1.00
L[1,2,2] 1.00
L[1,2,3]  NaN
L[1,2,4]  NaN
L[1,3,1] 1.00
L[1,3,2] 1.00
L[1,3,3] 1.00
L[1,3,4]  NaN
L[1,4,1] 1.00
L[1,4,2] 1.00
L[1,4,3] 1.00
L[1,4,4] 1.00
L[2,1,1]  NaN
L[2,1,2]  NaN
L[2,1,3]  NaN
L[2,1,4]  NaN
L[2,2,1] 1.02
L[2,2,2] 1.00
L[2,2,3]  NaN
L[2,2,4]  NaN
L[2,3,1] 1.00
L[2,3,2] 1.04
L[2,3,3] 1.00
L[2,3,4]  NaN
L[2,4,1] 1.02
L[2,4,2] 1.02
L[2,4,3] 1.01
L[2,4,4] 1.00
L[3,1,1]  NaN
L[3,1,2]  NaN
L[3,1,3]  NaN
L[3,1,4]  NaN
L[3,2,1] 1.01
L[3,2,2] 1.00
L[3,2,3]  NaN
L[3,2,4]  NaN
L[3,3,1] 1.01
L[3,3,2] 1.01
L[3,3,3] 1.00
L[3,3,4]  NaN
L[3,4,1] 1.00
L[3,4,2] 1.00
L[3,4,3] 1.02
L[3,4,4] 1.00
lp__     1.00

Samples were drawn using NUTS(diag_e) at Tue Mar 21 10:26:33 2017.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1)

As you can see, we get very good Rhat values and effective sample sizes, and the run time is reasonable. We also recover the input parameters really well:

params=extract(fit)
#density plots of the posteriors of the mixture means
par(mfrow=c(2,2))
plot(density(params$mu[,1,1]), ylab='', xlab='mu[1]', main='')
lines(density(params$mu[,1,2]), col=rgb(0,0,0,0.7))
lines(density(params$mu[,1,3]), col=rgb(0,0,0,0.4))
lines(density(params$mu[,1,4]), col=rgb(0,0,0,0.1))
abline(v=c(0), lty='dotted', col='red',lwd=2)

plot(density(params$mu[,2,1]), ylab='', xlab='mu[2]', main='')
lines(density(params$mu[,2,2]), col=rgb(0,0,0,0.7))
lines(density(params$mu[,2,3]), col=rgb(0,0,0,0.4))
lines(density(params$mu[,2,4]), col=rgb(0,0,0,0.1))
abline(v=c(7), lty='dotted', col='red',lwd=2)

plot(density(params$mu[,3,1]), ylab='', xlab='mu[3]', main='')
lines(density(params$mu[,3,2]), col=rgb(0,0,0,0.7))
lines(density(params$mu[,3,3]), col=rgb(0,0,0,0.4))
lines(density(params$mu[,3,4]), col=rgb(0,0,0,0.1))
abline(v=c(3), lty='dotted', col='red',lwd=2)
Marginalised 1D posteriors of the 3 Gaussian mixture means. The red dotted line marks the truth.
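As a quick numerical check (my own addition, not in the original post), the posterior means can also be compared directly to the inputs:

colMeans(params$mu[,1,]) # should be close to c(0,0,0,0)
colMeans(params$mu[,2,]) # close to c(7,7,7,7)
colMeans(params$mu[,3,]) # close to c(3,3,3,3)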

Top tips on maximising your publication reach


When publishing a paper on http://www.arxiv.org, here are some neat ways to maximise your impact.

  1. Getting to the top

    There has been research suggesting a correlation between appearing at the top of the arXiv listing and the number of citations a paper receives.

    As of 2017, arXiv's new deadline for next-day submissions is 2pm (EST), so in the UK that's 7pm! To get to the top of both the daily email list and the website, you now need to submit at 14:01 two days before you want your submission online (previously it was 13:59 the day before). This will ensure your paper is the first to be seen.

    You can check the deadline to submissions here: https://arxiv.org/localtime

  2. Staying at the top

    To make sure your paper stays on the front page for as long as possible, aim to have it appear on a Friday: it then stays at the top all weekend, maximising the probability that it will be noticed. This makes the ideal submission time 14:01 on Wednesday if you're in NYC, or 19:01 on Wednesday if you're in the UK.

  3. Time of year

    Another thing to keep in mind is the time of year. If you post over the Christmas holidays, it's highly likely that people will miss your paper. Similarly, in the summer many people are on vacation or away at conferences with no time to check arXiv. Be smart about the time of year you submit!

  4. Social media

    Be smart when posting to social media – a one-line overview and "the money plot" can be very effective for those who don't have time to read your paper.

Thanks for reading and I hope you find this useful. Also if you have any more tips you think I should add, please get in touch!

Best space themed christmas ideas…


It's almost Christmas and time to think about presents! Below I list my top 10 Christmas presents for the astronomy/space obsessed.

10. Time Travelling Toby and the Apollo moon landing by Graham Jones


Graham Jones' exciting take on the Apollo moon landings: this beautifully illustrated book is the perfect gift for young kids (ages 1-10), as approved by yours truly. Buy here

 

9. The Big Picture: On the origins of life, meaning and the Universe itself by Sean Carroll


For the slightly older readers, let Caltech theoretical physicist Sean Carroll lead you on a passage from the Big Bang to the present day and the meaning of our very existence. Buy here

8. Rosetta & Philae Plush


Whilst both Philae & Rosetta have met their ends on comet 67P, you can still remember them through this adorable plush. http://www.rosettashop.eu/

7. Meteorite Jewellery


Yes, women love jewellery! And how can you get more romantic than jewellery that originated in outer space? It's also so, so pretty! Buy here

6. Mars Rover Rescue by Andrew Rader


Another gorgeously illustrated space book, this time we go to Mars! (Ages 1-10). Buy here

5. A telescope


The ideal telescope for any aspiring astronomer, equipped with a solar filter, so even if you live in the UK and the weather isn't so great at night, you can still see something in the day. Buy here

4. A space bag!


I LOVE this space duffle from Herschel Supply Co. It’s out of this world! Buy here

3. Stocking filler mug


Hubble is a joint ESA-NASA optical/IR telescope with a resolution capable of seeing a person in Paris from New York. It is awesome! Buy me

2. Edible space candy


No words….. mmmmm edible space systems… Buy me

1. Galaxy murals


There’s no space like home 🙂 Buy here

Can being too opinionated be doing us more harm than good?


OK, this is going to be a really quick post, and not much like my others. Recently I met some fellow scientists, and it wasn't long before our conversation steered towards space travel, and in particular the prospect of a moon village.

Now, it would not surprise me if many members of the public found the idea controversial; however, I did not at all expect such antagonism from scientists. I, for one, am enthusiastic about all aspects of science. Nonetheless, they did not regard the moon village, or even the space programme, as science at all, and were only interested in spending money on their own scientific goals rather than "wasting" it elsewhere.

With the growth of social media and its ability to facilitate our communication with the world, it's concerning how much easier it is now than ever to spread hate and anger that could be damaging (and without consequences).

With decreasing science budgets, a scientist may believe that discrediting other areas of science will bring more money to their own project, but it is more likely to damage the reputation of the science community and lead to further funding cuts for science as a whole.

Exomars I


In today's post I want to talk about the ExoMars mission: what it's all about, all the drama in the news, and the future of ExoMars.

For those that don't know, ExoMars is a joint mission between ESA (the European Space Agency) and ROSCOSMOS (the Russian space agency) to search for evidence of life on Mars.

Several scientific groups, using both ground-based (Keck telescopes) and space-based (Mars Express) observations, have suggested large plumes of methane in the atmosphere. This is super exciting, since 90% of the methane on Earth is produced by living organisms.

ExoMars part 1 consists of the Trace Gas Orbiter (TGO) and the lander Schiaparelli – it's only ESA's second attempt to land on Mars, and their first attempt (Beagle 2) didn't go so well. The goals were not only to establish the presence of life on Mars but also to test the technology for future missions. TGO has the tools to sense methane and other trace gases (gases that make up <1% of the atmosphere) in Mars' atmosphere, and Schiaparelli has a small camera and sensors to evaluate the performance of the landing procedure.

It took ExoMars 7 months to reach the planet, at which point (16th October 2016) TGO and Schiaparelli separated. Whilst TGO began manoeuvres to steer into its orbit, Schiaparelli slowly coasted (over 3 days!!) to the edge of Mars' atmosphere, then took ~6 minutes to reach the surface. Whilst it may seem trivial, landing on Mars is difficult! Not only is there a ~10-minute communication lag, but a global dust storm was approaching, so the weather conditions weren't the greatest for landing, AND the lander was travelling at 18000 mph!!!

We had hoped that Schiaparelli would join NASA’s rover Opportunity in the Meridiani Planum region, an area where liquid water is likely to be found.

Many groups were listening out for Schiaparelli during its descent, including Mars Express and the GMRT (in India); unfortunately, its signal was cut short. The lander was also configured to go into hibernation 15 minutes after landing, so the fate of Schiaparelli is still unknown. Data analysed from the descent suggest that the ejection of the heat shield and the parachute occurred much earlier than planned, and point to possible problems with the thrusters (used to slow the lander down).

TGO, on the other hand, had a successful orbit insertion – hurrah! The main goal of the lander was to test technologies for future missions to Mars, so the data generated during the descent will prove very useful to engineers back on Earth.

ExoMars part II consists of a rover equipped with a panchromatic camera for stereoscopic imaging and a drill to take subsurface soil samples from 2 m below ground. It was originally planned for a 2018 launch, but insufficient funding has caused the mission to slip to 2020... With the fate of Schiaparelli unknown, it is difficult to say whether the ExoMars rover will ever get to Mars; however, a lot of research and work has already gone into it, and it would be an awful shame to give up on it now!

 

Science and games


Over the past year or so, I have been helping a small games company called Atomicom develop a new game. But how does a person like me fit into the gaming world? Well, I was given the role of scientific advisor – a simple task for a PhD in astrophysics, right? Think again.

JCB Pioneer Mars is a strategy survival game where you must survive, build and mine on Mars. Set in the future, you can expect to find futuristic JCB mining vehicles and stunning visuals of a Mars-esque world.


Credit: JCB Pioneer Mars

The problem with introducing science to a game is keeping it from being boring. To me science has always been fun, but how do you think the average gamer, constantly bombarded with action, will find the desolate Martian surface? In games there is a constant struggle between keeping the science true and keeping the game interesting.

The conditions on Mars are very extreme, with super-cool phenomena such as global and local dust storms, dust devils, meteor impacts, and more. According to the game developers, however, these features are nowhere near as exciting as they could be.

Typical wind speeds on Mars are about 20 m/s – that's an 8 on the Beaufort scale, a gale, but nothing life-threatening... In the game these winds appear much faster and are able to lift considerably large Martian rocks and dust.

Spinning columns of dust – "dust devils" – roam freely, towering kilometres high and metres wide. Although on Mars you can expect to see a handful of dust devils a day, in the game expect to see dust devils travelling in packs and with electrifying capabilities (yes, you heard me correctly :S).

With meteor impacts on Earth, we barely get much of a light show (and that's with an atmosphere to burn up in!). In many games, however, you will see meteor storms arrive like a showering rain of fireballs... I have no comment...

Don't get me wrong: the Mars environment is an extremely harsh one, and surviving on Mars is going to be difficult, but why games need to make conditions a million times harsher I will never understand...


Dust devil on Mars. Credit: HiRISE/NASA

Lastly, I'd like to point out how ridiculous it is to give things a "sciencey" name because it sounds cool, regardless of accuracy. One conversation I had went a bit like this:

Me: “‘Spectral analysis shows that the vehicle is operational?’ – that doesn’t make sense… that’s not how spectral analysis works”

Colleague: “Of course it is what’s wrong with it?”

Me: “Well spectral analysis means you look at the whole spectra to see emission/absorption at different wavelengths of light, you wouldn’t do that to check vehicle is operational”

Colleague: “Yeh but spectral sounds cool. It means light and we use visual inspection ”

Me: “If you only rely on visual then its not really using the whole spectrum…”

Colleague: “Well maybe we use X-ray and infrared too. You know like Superman’s X-ray vision… how does that work?…..”

Anyway, to cut a long story short, that has been pretty much my experience of working in the games industry. You give advice; most of the time they don't take it, and if they do, expect it to be nothing like you said. But at least you've taught them something.

Actually, to be fair, a lot of science did make it into the game, including real data from HiRISE and the Mars rovers to give a realistic experience. At the end of the day, the game looks stunning, and I'm certain it will be an amazing experience to play, even if it is a bit exaggerated.

JCB Pioneer Mars is accepting beta volunteers now, and will debut at the UK gaming conference EGX from Thursday 22nd to Sunday 25th September.

Galactica VR experience


So yesterday I submitted my PhD thesis – the product of 3.5 years of hard work devoted to science. To celebrate, I treated myself to Alton Towers! For those that don't know, Alton Towers is the largest theme park in the UK, and just last week they launched the first UK virtual reality ride – GALACTICA. I went to test it out, and I thought I'd share my experience.

Luckily for me, I visited the attraction on a quiet day, so the queues were non-existent. I was super excited and walked hastily through the queue line, where the first thing you do is have a photograph taken (typical of all roller coasters) – you get superimposed as an astronaut!

When it's time to get on the ride, the restraint comes down and you put on the VR headset. It's not exactly the most comfortable thing I've worn, since it feels quite heavy, but the adjustment on the back of your head ensures it stays on, and it's also attached to the restraint. My first thought was that the image isn't very clear, but you can adjust the focus specifically for your eyes.

Before you know it, the floor disappears, you are pivoted to lie face down, and a woman's voice begins to speak. You are in a future world – 3010, to be exact – and about to go on a trip around the Universe. In preparation for the flight you fly around a futuristic city with droids and drones all over the place; it was just so amazing.

You go through a portal (where the icy mist spray just intensifies the whole experience) and end up flying through an asteroid belt, seeing the birth of a star and visiting an icy world – it was all just so unbelievably beautiful!

Overall, the entire experience made me feel really emotional. I really enjoyed it; I felt like I was in space, and it made me envious that I don't live in that futuristic era where people can easily visit different worlds.

If you get the chance, I would definitely recommend you go and try out this ride. Unfortunately it's quite short, but I think this really is a game changer, and I hope to see more virtual reality experiences in the future. With VR we can do almost anything we ever wanted to do – travel through space and time, visit dinosaurs, fall into a black hole... the opportunities are endless!