

I'm trying to generate an optimized LHS (Latin Hypercube Sampling) design in R, with sample size N = 400 and d = 7 variables, but it's taking forever. My PC is an HP Z820 workstation with 12 cores and 32 GB RAM, running Windows 7 64-bit, and I'm using Microsoft R Open, which is a multicore build of R.

The code has been running for half an hour, but I still don't see any results:
    library(lhs)
    lhs_design <- optimumLHS(n = 400, k = 7, verbose = TRUE)

Is there anything I could do to speed it up? I heard that parallel computing may help with R, but I don't know how to use it, and I have no idea whether it speeds up only code that I write myself or whether it can also speed up an existing package function such as optimumLHS.

I don't have to use the lhs package necessarily; my only requirement is that the LHS design be optimized in terms of the S-optimality criterion, the maximin metric, or some other similar optimality criterion (thus, not just a vanilla LHS). If worse comes to worst, I could even accept a solution in an environment other than R, but it would have to be MATLAB or an open-source environment.
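For what it's worth, the lhs package also offers maximinLHS, which builds a design around the maximin criterion directly; I haven't checked whether its designs are good enough for my purposes or how long it takes at this size, but the call itself is simple (the seed below is arbitrary):

    library(lhs)

    # A maximin-criterion LHS built directly by the lhs package.
    set.seed(42)                          # arbitrary seed for reproducibility
    design <- maximinLHS(n = 400, k = 7)
    dim(design)                           # 400 x 7 matrix on the unit hypercube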
Based on the algorithm, I don't think there is a way to speed things up via parallel processing, since to optimize the separation between sample points you need to know the locations of all the sample points. I think your only option for speeding this up will be to take a smaller sample or get access to a faster computer. It seems to me you might be in for a very long wait indeed. You can get a rough idea of how long the full run might take by timing optimumLHS on smaller designs first.
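As a rough sketch of such a benchmark (the loop range and the use of system.time() are just one way to do it):

    library(lhs)

    # Time optimumLHS for increasing n (k = 7 throughout) to get a feel for
    # how the run time grows before committing to n = 400.
    performance <- data.frame()
    for (i in seq(50, 200, by = 50)) {
      time <- system.time(
        invisible(optimumLHS(n = i, k = 7, verbose = FALSE))
      )[["elapsed"]]
      performance <- rbind(performance, data.frame(time = time, n = i))
    }
    print(performance)

Extrapolating from these timings to n = 400 should give you an idea of whether the full run is feasible at all.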
