Large volumes of waste water and low levels of metals mean that highly effective materials such as nanoparticles or nanostructures need to be employed to remove the dissolved metals from the stream. The challenge in using nanotechnology lies in the recovery of the particles, as filtration proves ineffective; this article discusses the use of magnetic composites as a potential solution to this challenge.

Abstract. Composites of magnetite and maghemite with a nanostructured calcium silicate hydrate are generated and used in the sorption of copper from solution. The superparamagnetic components allow the use of high-gradient separation, thereby circumventing the time-consuming recovery of the silicate by filtration. The sorption capacity of the composites is comparable to that of the pure silicate. The ideal ratio of iron oxide to calcium silicate hydrate is identified to be 10 wt % of magnetite or maghemite.

Cross-validation is a process by which a method that works for one sample of a population is checked for validity by applying the method to another sample from the same population. Surprisingly, many statisticians see cross-validation as something data miners do, but not as a core statistical technique. It might be helpful to summarize the role of cross-validation in statistics.

Cross-validation is primarily a way of measuring the predictive performance of a statistical model. Every statistician knows that model fit statistics are not a good guide to how well a model will predict: a high R² does not necessarily mean a good model. It is easy to over-fit the data by including too many degrees of freedom and so inflate R² and other fit statistics. For example, in a simple polynomial regression I can just keep adding higher-order terms and so get better and better fits to the data. But the predictions from the model on new data will usually get worse as higher-order terms are added (the first sketch below illustrates the effect).

Cross-validation is a model evaluation method that is better than residuals. The problem with residual evaluations is that they do not give an indication of how well the learner will do when it is asked to make new predictions for data it has not already seen. One way to overcome this problem is to not use the entire data set when training a learner: some of the data is removed before training begins. Then, when training is done, the data that was removed can be used to test the performance of the learned model on "new" data. This is the basic idea for a whole class of model evaluation methods called cross-validation.

The holdout method is the simplest kind of cross-validation. The data set is separated into two sets, called the training set and the testing set. The function approximator fits a function using the training set only. Then the function approximator is asked to predict the output values for the data in the testing set (it has never seen these output values before). The errors it makes are accumulated as before to give the mean absolute test set error, which is used to evaluate the model. The advantage of this method is that it is usually preferable to the residual method and takes no longer to compute. A sketch of the holdout split follows the overfitting example below.
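The following is a minimal sketch, not from the original article, of the over-fitting effect described above: polynomial fits of increasing degree drive the error on seen data down while the error on held-out data grows. The data, degrees, and split are illustrative assumptions; only numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noisy samples from a simple underlying curve.
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

# Hold out every third point as "new" data the fits never see.
test_mask = np.arange(x.size) % 3 == 0
x_train, y_train = x[~test_mask], y[~test_mask]
x_test, y_test = x[test_mask], y[test_mask]

for degree in (1, 3, 9):
    # Fit a polynomial of the given degree on the training points only.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mae = np.mean(np.abs(np.polyval(coeffs, x_train) - y_train))
    test_mae = np.mean(np.abs(np.polyval(coeffs, x_test) - y_test))
    print(f"degree {degree}: train MAE {train_mae:.3f}, test MAE {test_mae:.3f}")
```

Running this typically shows the training error shrinking monotonically with degree while the test error stops improving and then worsens, which is exactly why fit statistics alone are a poor guide to predictive performance.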
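And here is a minimal sketch of the holdout method itself, again an illustration rather than the tutorial's own code: the data set is split once into a training set and a testing set, the "function approximator" (here a straight-line least-squares fit) is trained on the first, and the mean absolute test set error is accumulated on the second. The 70/30 split fraction and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: a linear trend with noise.
x = rng.uniform(0, 10, size=100)
y = 2.5 * x + 4.0 + rng.normal(scale=1.5, size=x.size)

# Separate the data set into a training set and a testing set.
order = rng.permutation(x.size)
split = int(0.7 * x.size)  # 70% train / 30% test
train_idx, test_idx = order[:split], order[split:]

# Fit the function approximator using the training set only.
slope, intercept = np.polyfit(x[train_idx], y[train_idx], 1)

# Predict output values for the testing set (never seen during training)
# and accumulate the errors into the mean absolute test set error.
y_pred = slope * x[test_idx] + intercept
mae = np.mean(np.abs(y_pred - y[test_idx]))
print(f"mean absolute test set error: {mae:.3f}")
```

The single split keeps the evaluation as cheap as a residual check while still scoring the model on data it was not trained on.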