In [7]:
using Random, Distributions, DataFrames, Turing, StatsPlots, Plots
#using Pkg
#Pkg.add(["DataFrames", "Turing", "StatsPlots", "Plots"])

Binary endpoint trial design, Success vs. Fail.

In the example below, I implement a simple one-arm Success vs. Fail trial design with a flat prior. The trial data follow a binomial distribution. For illustration, I set a flat prior, Uniform(0, 1), which is the same as a Beta(1, 1) distribution and is conjugate to the binomial likelihood. The posterior therefore has a known closed form, Beta(Num_Success + 1, n - Num_Success + 1). Nonetheless, I will estimate the posterior numerically with Turing.jl and use the closed-form Beta distribution to verify the results.

Proof:

$$P(p \mid \text{data}) = \frac{P(\text{data} \mid p)\,P(p)}{P(\text{data})}$$

$$P(p \mid \text{data}) \propto \text{Binomial}(k;\, n, p)\,\text{Beta}(p;\, 1, 1)$$

$$P(p \mid \text{data}) \propto \binom{n}{k} p^{k} (1-p)^{n-k} \times 1$$

$$P(p \mid \text{data}) \propto p^{(k+1)-1} (1-p)^{(n-k+1)-1}$$

$$p \mid \text{data} \sim \text{Beta}(k+1,\, n-k+1)$$

where $k$ is the observed number of successes out of $n$ patients; the last step follows because the preceding line is exactly the kernel of a Beta density.
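As a quick sanity check of this algebra (a sketch using the k = 7 successes that the simulation below happens to produce), the closed-form posterior is easy to inspect with Distributions.jl:

# Closed-form conjugate update: k successes out of n trials under a Beta(1, 1)
# prior yield a Beta(k + 1, n - k + 1) posterior.
n, k = 10, 7
post = Beta(k + 1, n - k + 1)                    # Beta(8, 4)
mean(post)                                       # posterior mean (k + 1) / (n + 2) ≈ 0.667
(quantile(post, 0.025), quantile(post, 0.975))   # central 95% credible interval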
In [50]:
## For a binary endpoint trial design, Success vs. Fail.
## Sampling distribution - Binomial
## Flat Prior distribution - Uniform(0, 1)

## Specify Trial Parameters
# N = number of patients in the single arm
# p = TRUE probability of success under this treatment
N = 10
p = 0.6
##


#= This is it. All we have to do is define the model with Turing.jl's @model macro.
    The data are passed in as an argument; in this case x, the number of successes.
    n, the number of trials, is passed in as a parameter.
    Inside the function we simply write out the distributions as we would by hand:
    the prior is m ~ Uniform(0, 1),
    and the data x are Binomial(n, m) given that prior.
=#

@model function binaryTrial(x::Vector{Int64}, n::Int64) 
    m ~ Uniform(0, 1)
    x ~ Binomial(n, m)
end
#= Because of this intuitive way of declaring distributions in Turing.jl, we can layer
    priors into considerably more complex models, like so:
A Normal model with unknown mean and variance.
@model function gdemo(x, y)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x ~ Normal(m, sqrt(s²))
    y ~ Normal(m, sqrt(s²))
end
(I will showcase more complex cases like this in the next tutorial)
=#
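As a teaser, conditioning gdemo on data and sampling works exactly like the binary model below. A minimal sketch, where the observations 1.5 and 2.0 are arbitrary placeholders:

# Hypothetical usage of the gdemo model sketched above, with two toy observations
gdemo_chain = sample(gdemo(1.5, 2.0), NUTS(), 1000)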

## Simulate trial data
# this seed will give data=7 successes
Random.seed!(5)
d = Binomial(N, p)
# generate a single draw: one trial aggregates N Bernoulli outcomes into one Binomial count
data = rand(d, 1)
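As an optional aside, a flat Beta(1, 1) prior has a neat prior-predictive consequence: before seeing any data, every number of successes from 0 to N is equally likely, with probability 1/(N+1). A small simulation sketch confirms it:

# Prior-predictive sketch: draw p from the flat prior, then simulate a trial;
# the resulting success counts are uniform on 0..N.
prior_pred = [rand(Binomial(N, rand(Uniform(0, 1)))) for _ in 1:10_000]
histogram(prior_pred, bins=-0.5:1:N+0.5, normalize=true, label="prior predictive")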

## Instantiate the model by passing the data into it. The object "model" now holds the full model specification
model = binaryTrial(data, N)

#= In the next step below we sample from the posterior with a specified sampler:
    a single chain of 1000 draws, using the No-U-Turn Sampler (NUTS),
    an extension of Hamiltonian Monte Carlo (HMC).
    Results are stored in "chain". The warnings about rejected proposals below
    appear during warm-up while NUTS adapts its step size and are harmless here.
=#
chain = sample(model, NUTS(), 1000)
┌ Info: Found initial step size
│   ϵ = 1.6
└ @ Turing.Inference /Users/caesar/.julia/packages/Turing/rl6ku/src/inference/hmc.jl:188
┌ Warning: The current proposal will be rejected due to numerical error(s).
│   isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /Users/caesar/.julia/packages/AdvancedHMC/kB7Xa/src/hamiltonian.jl:47
(the same warning was emitted several more times during warm-up)
Out[50]:
Chains MCMC chain (1000×13×1 Array{Float64, 3}):

Iterations        = 501:1:1500
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 0.75 seconds
Compute duration  = 0.75 seconds
parameters        = m
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std   naive_se      mcse        ess      rhat   e ⋯
      Symbol   Float64   Float64    Float64   Float64    Float64   Float64     ⋯

           m    0.6633    0.1404     0.0044    0.0054   710.5397    0.9990     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5% 
      Symbol   Float64   Float64   Float64   Float64   Float64 

           m    0.3669    0.5716    0.6747    0.7709    0.8995
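A single chain is enough for this toy example. If you want several chains in parallel, Turing's sample supports it via a multi-chain method; a sketch, assuming Julia was started with multiple threads (e.g. julia --threads 4):

# Sketch: four chains of 1000 draws each, sampled on separate threads
multi_chain = sample(model, NUTS(), MCMCThreads(), 1000, 4)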
In [52]:
## Verify that the numerical posterior density matches that of Beta(Num_Success + 1, n - Num_Success + 1)

histogram(chain[:m], bins=30, alpha=0.4, label="NUTS posterior samples", normalize=true) 
plot!(Beta(data[1] + 1, N - data[1] + 1), labels = "Beta($(data[1]+1), $(N - data[1] + 1))")
Out[52]:
[Figure: normalized histogram of the posterior samples of m, overlaid with the Beta(8, 4) density]
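Beyond the visual overlay, a quick numeric comparison of the chain against the closed-form posterior (a small sketch; vec flattens the chain's samples into a plain vector):

# Compare Monte Carlo moments with the exact Beta(8, 4) moments
exact = Beta(data[1] + 1, N - data[1] + 1)
(mean(vec(chain[:m])), mean(exact))   # posterior means should agree closely
(std(vec(chain[:m])), std(exact))     # as should the posterior standard deviations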
In [53]:
## Decision Making
# Against a null of the current treatment's success rate, p* = 0.35:
# what is the probability that the treatment arm has the same or a worse success rate, Prob(p <= 0.35)?
sum(chain[:m] .<= 0.35) / size(chain[:m], 1)
Out[53]:
0.019
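The conjugate posterior gives the same tail probability in closed form, which serves as an exact cross-check on the Monte Carlo estimate above:

# Exact Prob(p <= 0.35) under the Beta(8, 4) posterior
cdf(Beta(data[1] + 1, N - data[1] + 1), 0.35)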