No Such Thing as Dark Matter

We’ve not been able to detect dark matter yet. Natalie Wolchover summarizes theories that could explain the way the universe works without dark matter.

Key to these is the Modified Newtonian Dynamics (MOND) equation, which explains why the stars at the outer edges of galaxies are moving faster than Newton’s force law predicts they should.

Stars further away from the center of the galactic disk (larger R) have a higher velocity (V) than predicted by Newtonian physics. Dark matter has been used to explain this discrepancy, but tweaking the physics equations could do so as well. Image from Wikipedia.

Newton’s Second Law finds that the Force (F) acting on an object is equal to its mass (m) multiplied by its acceleration (a):

 F = m \cdot a

The MOND equation adjusts this by adding in another multiplicative factor (μ):

 F = \mu \cdot m \cdot a

μ is just really close to 1 under “normal” everyday conditions, but gets smaller when accelerations are really, really small. Based on the evidence so far, an equation for μ may be:

 \mu = \frac{a}{a_0} \frac{1}{\sqrt{1+\left(\frac{a}{a_0}\right)^2}}

where a₀ is a really, really small acceleration.
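A quick check of the limits (my own working) shows why: when a is much larger than a₀ the square root is approximately a/a₀, and when a is much smaller than a₀ the square root is approximately 1:

 \mu \approx \frac{a/a_0}{a/a_0} = 1 \quad \text{for } a \gg a_0

 \mu \approx \frac{a}{a_0} \ll 1 \quad \text{for } a \ll a_0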

Factoring this μ into the equation for the force due to gravity ( F_g ) changes it from:

 F_g = G \frac{ m_1 \cdot m_2}{r^2}

into:

 F_g = G \frac{m_1 \cdot m_2}{r^2} + \frac{\sqrt{G \cdot m_1 \cdot m_2 \cdot a_0}}{r}

The key point is that in the first term, which is our standard version, the denominator is the radius squared (r^2) while the second term has a plain radius denominator (r).

This means as the distance between two objects gets larger, the first term decreases much faster and the second term becomes more important.

As a result, the gravitational pull between, say, a star at the edge of a galaxy and the center of the galaxy is not as small as the standard gravitational equation would predict, and the stars at the edges of galaxies move faster than they would otherwise be predicted to.
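To see how the two terms trade off, here is a minimal sketch in R (my own illustration: G, the masses, and a₀ are made-up, unit-free values, so only the relative scaling is meaningful):

# Compare the standard (1/r^2) term with the additional MOND (1/r) term
# as the separation r grows. All values are illustrative, not physical.
G  <- 1
m1 <- 1     # mass of the galaxy's center
m2 <- 1     # mass of the star
a0 <- 1e-4  # the really, really small acceleration

r <- c(1, 10, 100, 1000, 10000)

newton_term <- G * m1 * m2 / r^2
mond_term   <- sqrt(G * m1 * m2 * a0) / r

# The ratio grows linearly with r: the second term eventually dominates.
data.frame(r, newton_term, mond_term, ratio = mond_term / newton_term)

With these particular values the two terms are equal at r = 100, and by r = 10,000 the second term is a hundred times larger.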


What is Dark Matter?

Adam Hadhazy, in Discover Magazine, summarizes the top candidates to explain dark matter and the experiments in progress to find them. These include WIMPs (Weakly Interacting Massive Particles), Axions, Sterile Neutrinos, and SIMPs (Strongly Interacting Massive Particles).

Distortions in the shapes of galaxies caused by gravitational lensing. While gravitational lensing is caused by anything with gravity (this means normal matter as well), the lensing effect of dark matter is a key form of evidence for its presence. Image of the galaxy cluster Abell 2218 via Wikimedia Commons.

via Brian Resnick on Vox, who provides some very interesting historical context on the discovery of dark matter.

Electron Configuration Practice

A quick electron configuration practice webpage that lets you enter the symbol for an element and see if you can write out the electron configuration in both the full and noble gas forms.
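For example, for sulfur (S), which has 16 electrons, the full form is 1s² 2s² 2p⁶ 3s² 3p⁴ and the noble gas form is [Ne] 3s² 3p⁴.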

Screen capture from the electron configuration webpage. Sulfur (S) is entered, and then the long form and noble gas form of the configurations can be entered and checked. In this case, there is an error in one part of the noble gas form.

The table at the bottom is a guide to filling the electron shells and orbitals. You can click any of the blue squares to change the number of electrons in the orbital.

Update

An improved version of the table at the bottom is here.

Linearizing an Exponential Function: Radioactive Decay

Using this data for the decay of a radioisotope, find its half life.

t (s)   A (g)
0       100
100     56.65706876
200     32.10023441
300     18.18705188
400     10.30425049
500     5.838086287
600     3.307688562

We can start with the equation for decay based on the half life:

   A = A_0 (\frac{1}{2})^\frac{t}{\lambda}  

 
where:
 A = \text{Amount of radioisotope (usually a mass)}
 A_0 = \text{Initial amount of radioisotope (usually a mass)}
 t = \text{time}
 \lambda = \text{half life}

and linearize (make it so it can be plotted as a straight line) by using logarithms.

Take the log of each side (use base 2 because of the half life):

  \log_2{(A)} = \log_2{  \left( A_0 (\frac{1}{2})^\frac{t}{\lambda} \right)} 

Use the rules of logarithms to simplify:

 \log_2{(A)} = \log_2{ ( A_0 )} + \log_2{  \left( (\frac{1}{2})^\frac{t}{\lambda} \right)}   

  \log_2{(A)} = \log_2{ ( A_0 )} +  \frac{t}{\lambda}  \log_2{   (\frac{1}{2}) }      

 \log_2{(A)} = \log_2{ ( A_0 )} +  \frac{t}{\lambda}  (-1)   

  \log_2{(A)} = \log_2{ ( A_0 )} -  \frac{t}{\lambda}       

Finally rearrange a little:

  \log_2{(A)} =  -  \frac{t}{\lambda}  +  \log_2{ ( A_0 )}      

  \log_2{(A)} =  -  \frac{1}{\lambda} t +  \log_2{ ( A_0 )}       

Now, since the two variables in the last equation are A and t, we can see the analogy between this equation and the equation of a straight line:

 \log_2{(A)} =  -  \frac{1}{\lambda} t +  \log_2{ ( A_0 )}        

and,

   y =  m x +  b       

where:

   y = \log_2{(A)}        

   m =  -  \frac{1}{\lambda}        

   x = t       

   b =  \log_2{ ( A_0 )}        

So if we draw a graph with log₂(A) on the y-axis and time (t) on the x-axis, the slope of the line should be:

   m =  -  \frac{1}{\lambda}        

Which we can use to find the half life (λ).
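Here is a minimal sketch in R (my own check, using the data table above) of that procedure: take log₂ of the amounts, fit a straight line, and back the half life out of the slope:

# Linearize the decay data and find the half life from the slope
t <- seq(0, 600, by = 100)   # time (s)
A <- c(100, 56.65706876, 32.10023441, 18.18705188,
       10.30425049, 5.838086287, 3.307688562)   # amount (g)

fit <- lm(log2(A) ~ t)     # fits log2(A) = m*t + b
m <- coef(fit)[["t"]]      # slope, m = -1/lambda
lambda <- -1/m             # half life (s)
lambda

The intercept, coef(fit)[["(Intercept)"]], should likewise come out close to log₂(A₀) = log₂(100).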

Radioactive Half Lives

Since we most commonly talk about radioactive decay in terms of half lives, we can write the equation for the amount of a radioisotope (A) as a function of time (t) as:

  A = A_0 (\frac{1}{2})^\frac{t}{\lambda} 

where:
 A = \text{Amount of radioisotope (usually a mass)}
 A_0 = \text{Initial amount of radioisotope (usually a mass)}
 t = \text{time}
 \lambda = \text{half life}

To reverse this equation, that is, to find the age of a sample (time), we have to solve for t.

Take the log of each side (use base 2 because of the half life):

  \log_2{(A)} = \log_2{  \left( A_0 (\frac{1}{2})^\frac{t}{\lambda} \right)} 

Use the rules of logarithms to simplify:

 \log_2{(A)} = \log_2{ ( A_0 )} + \log_2{  \left( (\frac{1}{2})^\frac{t}{\lambda} \right)}   

  \log_2{(A)} = \log_2{ ( A_0 )} +  \frac{t}{\lambda}  \log_2{   (\frac{1}{2}) }      

 \log_2{(A)} = \log_2{ ( A_0 )} +  \frac{t}{\lambda}  (-1)   

 \log_2{(A)} = \log_2{ ( A_0 )} -  \frac{t}{\lambda}      

Now rearrange and solve for t:

 \log_2{(A)} - \log_2{ ( A_0 )} = -  \frac{t}{\lambda}      

 -\lambda \left( \log_2{(A)} - \log_2{ ( A_0 )} \right) = t      

  -\lambda \cdot \log_2{ \left( \frac{A}{A_0} \right)}  = t      
 

So we end up with the equation for time (t):

  t = -\lambda \cdot \log_2{ \left( \frac{A}{A_0} \right)}         
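As a quick sanity check (my own example): if only a quarter of the original sample remains, two half lives should have passed, and the equation agrees:

  t = -\lambda \cdot \log_2{ \left( \frac{1}{4} \right)} = -\lambda \cdot (-2) = 2 \lambda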
 

Now, because this last equation is linear in log₂(A/A₀), if we’re careful, we can use it to determine the half life of a radioisotope. As an assignment, find the half life for the decay of the radioisotope given below.

t (s)   A (g)
0       100
100     56.65706876
200     32.10023441
300     18.18705188
400     10.30425049
500     5.838086287
600     3.307688562

Comparing Covid Cases of US States

Missouri’s confirmed cases (z-score) compared to the other U.S. states from April 20th to October 3rd, 2020. The z-score is a measure of how far a value is from the average. In this case, a negative z-score is good because it indicates that the state is below the average number of cases (per 1000 people).
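For reference, the z-score used here (and computed in the code below) is the standard one: a state’s per-capita case count (x) minus the mean across all states (x̄), divided by the standard deviation (s):

 z = \frac{x - \bar{x}}{s}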

Based on my students’ statistics projects, I automated the method (using R) to calculate the z-score for all the states in the U.S. We used the Johns Hopkins daily data.

I put graphs for all of the states on the COVID: The U.S. States Compared webpage.

The R script (test.R) assumes all of the data is in a folder (COVID-19-master/csse_covid_19_data/csse_covid_19_daily_reports_us/), and outputs the graphs to the folder 'images/zscore/', which needs to exist.

# Calculate a state's z-score for confirmed cases per 1000 people,
# relative to all of the other states, for a single day's data file.
covid_data <- function(infile, state="Missouri") {
    filename <- paste(file_dir, infile, sep='')
    mydata <- read.csv(filename)
    # Merge in the state populations to get per-capita numbers
    pop <- read.csv('state_populations.txt')
    mydata <- merge(mydata, pop)
    mydata$ConfirmedPerCapita1000 <- mydata$Confirmed / mydata$Population * 1000
    # z-score: distance from the mean in standard deviations
    stddev <- sd(mydata$ConfirmedPerCapita1000)
    avg <- mean(mydata$ConfirmedPerCapita1000)
    cpc1k <- mydata[mydata$Province_State == state,]$ConfirmedPerCapita1000
    zscore <- (cpc1k - avg)/stddev
    return(zscore)
}
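To test just this function from the R console (the file name here is my own example, assuming the Johns Hopkins mm-dd-yyyy.csv naming):

> covid_data('04-20-2020.csv', state='Missouri')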


# Build a data frame of the state's z-score for every daily data file,
# plot the history, and return the data frame.
get_zScore_history <- function(state='Missouri') {
  df <- data.frame(Date=as.Date(character()), zscore=numeric())
  for (f in datafiles){
    # The data file names start with the date
    dateString <- as.Date(substring(f, 1, 10), format='%m-%d-%y')
    zscore <- covid_data(f, state=state)
    df[nrow(df) + 1,] <- list(dateString, zscore)
  }
  df$day <- 1:nrow(df)

  plot_zScore(df, state)

  # LINEAR REGRESSION of the z-score trend over time (currently unused):
  # http://r-statistics.co/Linear-Regression.html
  lmod <- lm(zscore ~ day, df)
  return(df)
}

# Plot a state's z-score history and save it as a png.
plot_zScore <- function(df, state){
  # Make the y-axis symmetric about zero
  max_z <- max( abs(max(df$zscore)), abs(min(df$zscore)) )

  plot(x=df$day, y=df$zscore, main=paste('z-score: ', state), xlab="Day since April 20th, 2020", ylab='z-score', ylim=c(-max_z, max_z))
  abline(0, 0, col='firebrick')   # reference line at the average (z = 0)
  dev.copy(png, paste('images/zscore/', state, '-zscore.png', sep=''))
  dev.off()
}

# Get the list of state names from the most recent data file.
get_states <- function(){
  lastfile <- datafiles[ length(datafiles) ]
  filename <- paste(file_dir, lastfile, sep='')
  mydata <- read.csv(filename)
  # Merging with the population table drops the non-state entries
  pop <- read.csv('state_populations.txt')
  mydata <- merge(mydata, pop)
  return(mydata$Province_State)
}

# Generate and save a z-score graph for every state.
graph_all_states <- function(){
  states <- get_states()
  for (state in states) {
    get_zScore_history(state)
  }
}

# Folder containing the Johns Hopkins daily data files
file_dir <- 'COVID-19-master/csse_covid_19_data/csse_covid_19_daily_reports_us/'
datafiles <- list.files(file_dir, pattern="*.csv")

print("To get the historical z-score data for a state run (for example):")
print(" > get_zScore_history('New York')" )

# Run the default state (Missouri) when the script is sourced
df <- get_zScore_history()

You can run the code in test.R in the R console using the command:

> source('test.R')

which does Missouri by default, but to do other states use:

> get_zScore_history('New York')

To get all the states use:

> graph_all_states()