lens, align.

Long is the time, but what is true comes to pass.

Opus_XIX.

2016-11-11 23:33:41 | Science News

(iPhone6s: Camera.)

WHY ARE WE HERE ?


□ Neither what is beautiful only on the surface nor what is complex in its depths exists as such on its own. Every phenomenon is a clash of subjectivities, acting deterministically according to where one places oneself. The axis on which things are judged develops contingently from which layer of oneself and of the object's constituents one commits to. There is no shallow and no deep; only the question of where to move an unsolvable puzzle ring.

At the same time, these give the struggle over what to call worthless and what to call meaningful the very ground on which it operates. There is no reason; there is a mechanism.


□ Song, politics, and literature are all behaviors premised on being possessed by a kind of "communal illusion" belonging to their respective contexts. Literature was supposed to wrestle with the "literature-like," song with the "song-like," politics with the "seemingly authentic." Since when did each of them turn into the "like" itself?


□ Historical paintings, works of art, films, all creations are themselves reflections of light passed through the eyes of the dead, mirrors of past scenes. If so, the boundary between the living other and the dead may not be so distinct. Our hell lies where we can explain why people sing of love, yet cannot solve the mystery of why we fell in love.






□ Categorizing Ideas about Systematics: Alternative Trees of Trees & Related Representations

>> http://biorxiv.org/content/biorxiv/early/2016/10/06/079483.full.pdf

Clusters that appeared fairly distinct in the UPGMA dendrogram proved to be less clear-cut in the networks and ordinations.
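
A minimal sketch of how such a comparison can be set up (hypothetical random data; scipy and scikit-learn assumed): build a UPGMA, i.e. average-linkage, dendrogram from a distance matrix and an MDS ordination of the same distances, then check whether cluster boundaries persist.

# Sketch: compare a UPGMA dendrogram with an MDS ordination of the same distances.
# Data are random placeholders; scipy and scikit-learn are assumed to be available.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))            # 30 items described by 8 characters

d = pdist(X, metric="euclidean")        # condensed distance matrix
upgma = linkage(d, method="average")    # UPGMA = average linkage
clusters = fcluster(upgma, t=4, criterion="maxclust")

# 2-D ordination of the same distance matrix
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(squareform(d))

# Items sharing a UPGMA cluster need not stay separated in the ordination.
for c in np.unique(clusters):
    print(c, coords[clusters == c].mean(axis=0))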




□ Adaptive Somatic Mutation Calls with Deep Learning and Semi-Simulated Data:

>> http://biorxiv.org/content/early/2016/10/04/079087




□ Dynamical patterns of coexisting strategies in a hybrid discrete-continuum spatial evolutionary game model:

>> http://biorxiv.org/content/early/2016/10/05/079434

the model describes the dynamics of players who move in a square domain Ω ≡ [−l, l] × [−l, l]; numerical simulations are complemented with rigorous analytical results to reach conclusions with broad structural stability under parameter changes.
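
Not the authors' hybrid discrete-continuum model, but a toy discrete sketch of the setting: players performing random walks in the square domain and occasionally imitating a higher-scoring neighbour. The payoff matrix, interaction radius and update rule below are illustrative assumptions only.

# Toy sketch: random-walk players in [-l, l]^2 imitating better-scoring neighbours.
# Payoff matrix, interaction radius and update rule are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(1)
l, n_players, steps = 5.0, 200, 500
payoff = np.array([[3.0, 0.0],          # row player: cooperate
                   [5.0, 1.0]])         # row player: defect (Prisoner's Dilemma-like)

pos = rng.uniform(-l, l, size=(n_players, 2))
strat = rng.integers(0, 2, size=n_players)   # 0 = cooperate, 1 = defect

for _ in range(steps):
    # random motion, crudely confined to the domain
    pos = np.clip(pos + rng.normal(0.0, 0.1, size=pos.shape), -l, l)
    # pairwise interactions within a fixed radius
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    neigh = (d2 < 1.0) & ~np.eye(n_players, dtype=bool)
    score = np.array([payoff[strat[i], strat[neigh[i]]].sum() for i in range(n_players)])
    # each player imitates a random neighbour if that neighbour scored higher
    for i in range(n_players):
        js = np.flatnonzero(neigh[i])
        if js.size:
            j = rng.choice(js)
            if score[j] > score[i]:
                strat[i] = strat[j]

print("final fraction of cooperators:", 1.0 - strat.mean())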




□ Supernova assembler for Chromium Linked-Reads: Direct determination of a single whole-genome library:

>> http://support.10xgenomics.com/de-novo-assembly/software/overview/welcome

Supernova is the only practical method for creating diploid assemblies of large genomes. The approach is to first build an assembly using read kmers (K = 48), then resolve this assembly using read pairs (to K = 200), then use barcodes to effectively resolve it to K ≈ 100,000. At 56x coverage, the mean number of Linked-Reads per molecule for a human genome would thus be (1200M/2) / (10^6 × 10) = 60 read pairs, covering the molecule to a depth of (120 × 150) / 50,000 = 0.36x.
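
The arithmetic in the excerpt, spelled out (the 1200 M reads, ~10^6 partitions × ~10 molecules, 150 bp reads and 50 kb molecules are the figures quoted above):

# Worked version of the Linked-Read arithmetic quoted above (figures from the excerpt).
reads          = 1200e6      # total reads at ~56x human coverage
read_pairs     = reads / 2
molecules      = 1e6 * 10    # ~10^6 partitions, ~10 molecules each
read_len_bp    = 150
molecule_bp    = 50_000

pairs_per_molecule = read_pairs / molecules                                  # = 60
depth_per_molecule = (2 * pairs_per_molecule * read_len_bp) / molecule_bp    # = 0.36x

print(pairs_per_molecule, depth_per_molecule)    # 60.0 0.36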




□ Polynomial-time tensor decompositions with sum-of-squares:

>> https://arxiv.org/pdf/1610.01980v1.pdf

a discrete probability distribution D over R^n is represented by its probability mass function D: R^n → R, such that D(x) is the probability of x under the distribution for every x ∈ R^n.




□ Is this adaption of the Gillespie algorithm using Michaelis constants justifiable?:

>> http://biology.stackexchange.com/questions/52304/is-this-adaption-of-the-gillespie-algorithm-using-michaelis-constants-justifiabl

The goal is to run a discrete simulation of a biological system, as can be done, e.g., with the Gillespie algorithm.
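
For reference, a minimal direct-method Gillespie SSA for a Michaelis-Menten-style scheme E + S ⇌ ES → E + P; rate constants and counts are arbitrary illustration values, not taken from the linked question.

# Minimal Gillespie direct method for E + S <-> ES -> E + P (illustrative parameters).
import numpy as np

rng = np.random.default_rng(2)
x = np.array([100, 1000, 0, 0], dtype=float)   # state: [E, S, ES, P]
# stoichiometry of the three reactions (binding, unbinding, catalysis)
stoich = np.array([[-1, -1, +1,  0],
                   [+1, +1, -1,  0],
                   [+1,  0, -1, +1]], dtype=float)
k = np.array([0.001, 0.1, 0.05])               # mass-action rate constants

t, t_end = 0.0, 50.0
while t < t_end:
    a = np.array([k[0] * x[0] * x[1],   # E + S -> ES
                  k[1] * x[2],          # ES -> E + S
                  k[2] * x[2]])         # ES -> E + P
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)      # waiting time to the next reaction
    r = rng.choice(3, p=a / a0)         # which reaction fires
    x += stoich[r]

print("final counts [E, S, ES, P]:", x)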




□ Bayesian Analysis of Evolutionary Divergence with Genomic Data Under Diverse Demographic Models:

>> http://biorxiv.org/content/biorxiv/early/2016/10/12/080606.full.pdf

The locus-specific mutation rate model includes mutation scalars in the Markov chain state space and therefore requires more iterations in the MCMC simulation; the posterior surface explored is the joint probability of mutation scalars and coalescent trees. An exact accounting of all possible migration path histories in the second phase of the analysis allows migration paths to be removed from the MCMC phase and enables an importance sampling approach that does not rely on a demographic model.




□ Toward an Integration of Deep Learning and Neuroscience:

>> http://journal.frontiersin.org/article/10.3389/fncom.2016.00094/full

intelligence is enabled by many computationally specialized structures, each trained with its own developmentally regulated cost function, where both the structures and the cost functions are themselves optimized by evolution, much as hyperparameters are optimized in neural networks.




□ FIDDLE: An integrative deep learning framework for functional genomic data inference: genomic inference using Tensorflow and Torch.

>> http://biorxiv.org/content/biorxiv/early/2016/10/17/081380.full.pdf

the TSS-seq data can be learned accurately from different data types individually, and from their combination. FIDDLE is a generic framework that learns a unified rich representation by exploiting synergistic interactions within and across datasets.




□ torch: a scientific computing framework with wide support for machine learning algorithms, built on LuaJIT and C/CUDA.

>> http://torch.ch

Torch7's Tensor defines the all-powerful tensor object that provides multi-dimensional numerical arrays with type templating.

-- From Torch's cwrap-based wrapper generation: the C source is assembled as strings,
-- here checking for a 1-element 1-D tensor and pushing its value as a Lua number.
table.insert(txt, string.format('if(!hasdims && arg%d->nDimension == 1 && arg%d->size[0] == 1)', arg.i, arg.i))
table.insert(txt, string.format('lua_pushnumber(L, (lua_Number)(*TH%s_data(arg%d)));}', Tensor, arg.i))




□ rasbhari: Optimizing Spaced Seeds for Database Searching, Read Mapping and Alignment-Free Sequence Comparison:

>> http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005107

In alignment-free DNA sequence comparison, the number N of pattern-based matches is used to estimate phylogenetic distances. With the Spaced Words approach to alignment-free sequence comparison, the calculated pattern sets yield more accurate estimates of phylogenetic distances.
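
A minimal sketch of the underlying spaced-word counting; the pattern and sequences are made-up, and rasbhari's actual contribution, optimizing the pattern sets, is not shown.

# Count matching spaced words between two sequences for one binary pattern.
# '1' = match position, '0' = don't-care; pattern and sequences are illustrative only.
from collections import Counter

def spaced_words(seq, pattern):
    keep = [i for i, c in enumerate(pattern) if c == "1"]
    L = len(pattern)
    return Counter(tuple(seq[i + j] for j in keep) for i in range(len(seq) - L + 1))

pattern = "1100101"
s1 = "ACGTACGTTGCAACGTACGT"
s2 = "ACGTTCGTTGCAACGAACGT"

w1, w2 = spaced_words(s1, pattern), spaced_words(s2, pattern)
n_matches = sum(min(w1[w], w2[w]) for w in w1)   # N: number of spaced-word matches
print(n_matches)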




□ Automating Morphological Profiling with Generic Deep Convolutional Networks:

>> http://biorxiv.org/content/biorxiv/early/2016/11/02/085118.full.pdf




□ Stochastic Variational Deep Kernel Learning:

>> https://arxiv.org/abs/1611.00336




□ iDEEP: RNA-protein binding motifs mining with a new hybrid deep learning based cross-domain knowledge integration approach

>> http://biorxiv.org/content/biorxiv/early/2016/11/03/085191.full.pdf

iDeep is a multimodal deep learning framework, a hybrid of CNNs and DBNs, that better integrates multiple heterogeneous data sources for predicting RBP interaction sites on RNAs.

import numpy as np  # Matthews correlation coefficient from confusion-matrix counts tp, tn, fp, fn
MCC = float(tp*tn - fp*fn) / np.sqrt((tp+fp)*(tp+fn)*(tn+fp)*(tn+fn))




□ Gyre and gimble in the proteasome:

>> http://www.pnas.org/content/early/2016/11/02/1616055113.short




□ Sequence kernel association tests for large sets of markers: tail probabilities for large quadratic forms:

>> http://biorxiv.org/content/biorxiv/early/2016/11/04/085639.full.pdf

fastSKAT can automatically alert users to the rare cases where the choice of approximation may affect results. The k largest eigenvalues of a weighted genotype covariance matrix (or the largest singular values of a weighted genotype matrix) are extracted, and a single term based on the Satterthwaite approximation is used for the remaining eigenvalues.
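
A rough numerical sketch of that strategy (not the fastSKAT implementation; genotypes and weights below are toy placeholders): keep the k leading eigenvalues of a weighted genotype covariance matrix exactly, and fold the remaining spectrum into a single Satterthwaite-matched chi-square term using trace identities, so the small eigenvalues never need to be computed.

# Sketch of the top-k eigenvalues + Satterthwaite-remainder idea (toy data, not fastSKAT code).
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(3)
G = rng.binomial(2, 0.05, size=(500, 2000)).astype(float)   # n samples x m variants (toy)
w = 1.0 / (G.mean(axis=0) / 2 + 1e-3)                       # placeholder variant weights
A = (G * w) @ G.T / G.shape[1]                              # weighted genotype covariance (n x n)

k = 20
lam_top = eigsh(A, k=k, which="LM", return_eigenvectors=False)

# Remaining spectrum via trace identities: sum(lam) = tr(A), sum(lam^2) = ||A||_F^2
rest_sum  = np.trace(A) - lam_top.sum()
rest_sum2 = (A * A).sum() - (lam_top ** 2).sum()

# Satterthwaite: match mean and variance of the remainder with a * chi2(d)
a = rest_sum2 / rest_sum
d = rest_sum ** 2 / rest_sum2
print("top-k eigenvalues kept exactly; remainder ~ %.3f * chi2(%.1f)" % (a, d))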




□ A Decision Support System for Inbound Marketers: An Empirical Use of Latent Dirichlet Allocation Topic Model:

>> https://arxiv.org/pdf/1611.00872v1.pdf

The key inferential problem for LDA is computing the posterior distribution of the hidden variables θ_d and z_d, the former Dirichlet-distributed and the latter multinomially distributed.
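
For the generative structure described above (θ_d Dirichlet, z_d multinomial), a minimal fit with a standard library; the corpus is illustrative only and this is not the decision support system from the paper.

# Minimal LDA fit; the per-document topic proportions correspond to theta_d above.
# Corpus is an illustrative placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["inbound marketing content strategy",
        "bayesian topic model inference",
        "content marketing campaign strategy",
        "dirichlet allocation topic inference"]

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

theta = lda.transform(X)   # per-document topic proportions (posterior mean of theta_d)
print(theta.round(2))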






□ Deep learning with feature embedding for compound-protein interaction prediction:

>> http://biorxiv.org/content/early/2016/11/07/086033

A 100-dimensional vector of protein features and a 200-dimensional vector of compound features obtained from these two embedding modules are then concatenated together to form a 300-dimensional input vector to a deep neural network classifier.
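
The input construction described above, as a sketch: the embedding vectors are random placeholders standing in for the two embedding modules, and the classifier is a generic MLP, not the network from the paper.

# Sketch: concatenate a 100-d protein embedding and a 200-d compound embedding into a
# 300-d input for a neural-network classifier. Embeddings and labels are random placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n_pairs = 1000
protein_emb  = rng.normal(size=(n_pairs, 100))
compound_emb = rng.normal(size=(n_pairs, 200))
X = np.concatenate([protein_emb, compound_emb], axis=1)   # shape (n_pairs, 300)
y = rng.integers(0, 2, size=n_pairs)                      # interaction labels (placeholder)

clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=200).fit(X, y)
print(clf.predict_proba(X[:5]))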




□ A Canonical Neural Mechanism for Behavioral Variability:

>> http://biorxiv.org/content/biorxiv/early/2016/11/07/086074.full.pdf

The paper considers the self-organization and adaptation of sensory-motor networks through Hebbian and reinforcement learning mechanisms. A circuit comprising strongly recurrent neural networks, organized in a topographic manner, is capable of driving variable behaviors.




□ WORMHOLE: Novel Least Diverged Ortholog Prediction through Machine Learning:

>> http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005182

Support vector machine (SVM) classifiers strongly improve LDO prediction over simple voting, a baseline method used in other meta-tools. WORMHOLE considers the patterns of ortholog calls across the 17 constituent algorithms and identifies signature patterns that correspond to likely LDOs. WORMHOLE uses the genome-wide predictions of PANTHER LDOs as a set of high-confidence examples to train machine learning classifiers.
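
A toy version of that setup (all data below are simulated placeholders): each candidate gene pair is described by the binary calls of the 17 constituent algorithms, a high-confidence set stands in for the PANTHER LDO examples, and an SVM is trained on it.

# Toy WORMHOLE-style setup: 17 binary ortholog calls per candidate pair, SVM trained
# on simulated high-confidence labels. Not the WORMHOLE code or data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_pairs = 2000
calls = rng.integers(0, 2, size=(n_pairs, 17))              # calls from 17 algorithms
# simulate a "signature": true LDOs tend to be called by more algorithms
is_ldo = (calls.sum(axis=1) + rng.normal(0, 2, n_pairs)) > 10

svm = SVC(kernel="rbf", probability=True).fit(calls, is_ldo)
scores = svm.predict_proba(calls)[:, 1]                      # LDO confidence per pair
print(scores[:5].round(3))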




□ Measuring the importance of annotation granularity to the detection of semantic similarity between phenotype profiles:

>> http://biorxiv.org/content/biorxiv/early/2016/11/08/086306.full.pdf

the computational expense of measuring semantic similarity can be prohibitive for fine-grained annotations due to an explosion in the number of classes required for reasoning when annotations draw from multiple ontologies.




□ PyBoolNet - A Python Package for the Generation, Analysis and Visualisation of Boolean networks.

>> http://dlvr.it/MZBHf4




□ A Deep Boosting Based Approach for Capturing the Sequence Binding Preferences of RNA-Binding Proteins from CLIP-Seq:

>> http://biorxiv.org/content/early/2016/11/08/086421

Pathogenic mutations near splice sites generally showed a greater difference in predicted binding scores than pathogenic mutations randomly chosen from the COSMIC records. DeBooster can outperform methods that take both sequence and structural information as input, including both GraphProt and a deep learning model.




□ Revealing the vectors of cellular identity with single-cell genomics:

>> http://www.nature.com/nbt/journal/v34/n11/full/nbt.3711.html




□ On the evaluation of the fidelity of supervised classifiers in the prediction of chimeric RNAs:

>> https://biodatamining.biomedcentral.com/articles/10.1186/s13040-016-0112-6

Ensemble learning strategies were shown to be more robust for this classification problem, providing an average AUC of 95% (ACC = 94%, Cohen's kappa = 0.87).
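
A compact example of evaluating such an ensemble: synthetic data stand in for the chimeric-RNA feature sets, a random forest plays the ensemble learner, and the reported metrics (AUC, accuracy, Cohen's kappa) are computed.

# Ensemble classifier with the metrics reported above; data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score, cohen_kappa_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
prob = rf.predict_proba(X_te)[:, 1]
pred = rf.predict(X_te)

print("AUC  :", roc_auc_score(y_te, prob))
print("ACC  :", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))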