blaze

With transparent maps of human organs, is the era of 3D-printed human organs coming?

社会主义新天地 submitted on 2020-04-28 19:36:41
Editor's note: this article comes from the WeChat public account "造就" (ID: xingshu100), author Huang Yicheng, republished with authorization by 36Kr. Researchers have for the first time succeeded in making intact human organs transparent. Using microscopic imaging techniques, they revealed the complex underlying structure of the transparent organs at the cellular level. The resulting organ maps can serve as templates for 3D bioprinting. In the future, this technology could produce artificial organs on demand for the many patients who need them. The results of this research were published in the journal Cell. [Figure: image of a transparent human brain] In biomedical research, "seeing is believing." Deciphering the structural complexity of human organs has long been a major challenge, because no technique could image organs at the cellular level. Researchers had previously obtained complete 3D cellular-level views of transparent mouse organs, but that method does not carry over to human organs. Human organs are particularly "stiff" because of the accumulation of insoluble molecules and of collagen in the tissue. As a result, the conventional detergents that render mouse organs transparent do not work on human organs, especially adult ones. "We had to change our approach completely and start from scratch to find new chemicals that could make human organs transparent," said Shan Zhao, a doctoral student at Helmholtz Zentrum München in Germany and first author of the study. After much trial and error, the team found that a detergent called CHAPS can open small holes in "stiff" human organs. CHAPS thereby allows other solutions to penetrate several centimeters deep into a human organ and turn it into a transparent structure.

How to provide user defined function for python blaze with sqlite backend?

醉酒当歌 submitted on 2019-12-23 10:57:50
Question: I connect to a SQLite database in Blaze using df = bz.Data("sqlite:///<mydatabase>"). Everything works fine, but I do not know how to use user-defined functions in my interaction with df. I have a column called IP in df, which is text containing IP addresses. I also have a function toSubnet(x, y), which takes an IP address (x) in text format and returns its /y subnet. For example: out = toSubnet('1.1.1.1', 24) out 1.1.1.0/24 Now if I want to map all IPs to their /14 subnets, I use: df.IP.map
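Since a SQL backend cannot run arbitrary Python server-side, a common workaround is to pull the column into pandas and map a plain Python helper over it. The helper below is a sketch of what a toSubnet-style function could look like using only the standard library's ipaddress module (the name to_subnet is illustrative, not from the question's code):

```python
import ipaddress

def to_subnet(ip: str, prefix: int) -> str:
    """Return the /prefix network containing the textual IP address."""
    # strict=False lets host bits be set, so '1.1.1.1/24' -> 1.1.1.0/24
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network)

print(to_subnet('1.1.1.1', 24))  # 1.1.1.0/24
print(to_subnet('1.1.1.1', 14))  # 1.0.0.0/14
```

With the column materialized as a pandas Series, mapping all IPs to their /14 subnets would then be a plain `series.map(lambda ip: to_subnet(ip, 14))`.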

How do you install the blaze module (Continuum analytics) in Python?

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-21 05:51:09
Question: How do you install Blaze natively (i.e., not in a virtual environment) in Python? The only instructions I can find are in the package's docs (see link), and there they use a virtual environment. Answer 1: I didn't find any instructions anywhere online for this, but it's relatively straightforward. About my platform and the tools I used: Mac OS X (Mountain Lion), Python 2.7.3, homebrew, pip. It looks like you might need to install Cython; I'm not sure, as I already had it installed. You can do this with pip install Cython.
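A minimal sketch of the native install path the answer describes, assuming pip can reach PyPI and that the package names (Cython, blaze) are current; conda is Continuum's own installer and is shown only as an alternative:

```shell
# Build dependency first (may already be present on your system)
pip install Cython

# Then install blaze itself from PyPI
pip install blaze

# Alternative: via conda, Continuum Analytics' package manager
# conda install blaze
```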

How to read a Parquet file into Pandas DataFrame?

女生的网名这么多〃 submitted on 2019-12-17 17:38:46
Question: How do I read a modestly sized Parquet dataset into an in-memory Pandas DataFrame without setting up cluster computing infrastructure such as Hadoop or Spark? This is only a moderate amount of data that I would like to read in memory with a simple Python script on a laptop. The data does not reside on HDFS; it is either on the local file system or possibly in S3. I do not want to spin up and configure other services like Hadoop, Hive or Spark. I thought Blaze/Odo would have made this

pydata blaze: does it allow parallel processing or not?

帅比萌擦擦* submitted on 2019-12-10 03:14:48
Question: I am looking to parallelise numpy or pandas operations. For this I have been looking into pydata's Blaze. My understanding was that seamless parallelisation was its major selling point. Unfortunately I have been unable to find an operation that runs on more than one core. Is parallel processing in Blaze available yet, or is it currently only a stated aim? Am I doing something wrong? I am using Blaze v0.6.5. Example of one function I was hoping to parallelise: (deduplication of a pytables column too
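The workload the question describes (deduplicating a column too large for memory) does not actually need Blaze at all: it can be done out-of-core in plain pandas by streaming the column in chunks and keeping only the running set of unique values. A sketch, with illustrative names and in-memory chunks standing in for a PyTables read:

```python
import pandas as pd

def unique_chunked(chunks):
    """Deduplicate values spread across chunks too large to hold at once.

    `chunks` is any iterable of pandas Series; only the running set
    of unique values is ever kept in memory.
    """
    seen = set()
    for chunk in chunks:
        seen.update(chunk.unique())
    return sorted(seen)

# Simulate a column arriving in chunks (e.g. from an HDF5/PyTables file)
chunks = [pd.Series([1, 2, 2]), pd.Series([2, 3, 1])]
print(unique_chunked(chunks))  # [1, 2, 3]
```

With a real PyTables file, the same loop would consume `pd.read_hdf(..., chunksize=...)` instead of the in-memory list.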

A Repository Snapshot Scheme Based on Git Namespaces

淺唱寂寞╮ submitted on 2019-12-09 20:43:47
Preface: Git is a distributed version control system, and a defining feature of distributed version control is that the remote repository and the local repository both contain the repository's complete data. In a centralized version control system, only the central server holds the complete data; the so-called local "repository" is merely a checkout of a particular revision from the remote server. If the central server fails and there is no backup server, most of the data in a centralized system's repositories is lost. From this it is easy to conclude that code in a distributed system is safer than in a centralized one. Safety is not absolute, however. As Git has been adopted by more and more people, users have wanted Git to absorb features of centralized systems to improve the user experience, and in those scenarios the safety of Git as a distributed system faces challenges: end users no longer fetch the complete data, so to keep a repository safe you still need to back up or mirror the repository on the remote server. (Users can speed up Git access with shallow clones, single-branch clones, or technologies such as GVFS, a virtual file system for Git.) Git gives developers a great deal of freedom: commits can be amended and resubmitted, and refs can be force-pushed<sup>1</sup> to the remote server, overwriting specific refs. Careless use of force pushes is very dangerous and can easily lose code; for enterprise repositories, sensible snapshots can reduce the loss of code assets after code is lost this way. (This is not to say that force pushes should be absolutely forbidden.<sup>2</sup>) After Gitee launched its enterprise edition, we also frequently received user feedback about the safety of code assets.
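The snapshot idea the title refers to can be sketched with plain refspecs: all of a repository's refs are copied under refs/namespaces/<name>/ in a storage repository, where they are invisible to ordinary clones but remain recoverable. The repository layout and the snapshot name below are illustrative, not Gitee's actual implementation:

```shell
set -e
# A source repo with one commit, and a bare "storage" repo.
git init -q src
git -C src -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git init -q --bare store.git

# Snapshot: copy every ref into the snapshot-1 namespace of store.git.
git -C src push -q ../store.git 'refs/*:refs/namespaces/snapshot-1/refs/*'

# The snapshot is just ordinary refs under a prefix, so a later force
# push to the normal refs cannot touch it:
git --git-dir=store.git for-each-ref refs/namespaces/snapshot-1
```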

Using odo to migrate data to SQL

a 夏天 submitted on 2019-12-07 04:37:23
Question: I have a large 3 GB CSV file, and I'd like to use Blaze to investigate the data and select down to the data I'm interested in analyzing, with the eventual goal of migrating that data into a suitable computational backend such as SQLite, PostgreSQL, etc. I can get that data into Blaze and work on it fine, but this is the part I'm having trouble with: db = odo(bdata, 'sqlite:///report.db::report') I'm not sure how to properly create a db file to open with sqlite. Answer 1: You can go directly from CSV to SQLite using the directions listed here: http://odo.pydata.org/en/latest/perf.html?highlight=sqlite#csv
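If odo is unavailable, the same CSV-to-SQLite migration can be sketched with the standard library alone; the file name, table name, and columns below are illustrative stand-ins for the 3 GB file:

```python
import csv
import sqlite3

# A tiny CSV standing in for the large file.
with open("report.csv", "w", newline="") as f:
    csv.writer(f).writerows([["id", "value"], ["1", "a"], ["2", "b"]])

conn = sqlite3.connect("report.db")  # creates the .db file if missing
with open("report.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    conn.execute("DROP TABLE IF EXISTS report")
    conn.execute("CREATE TABLE report (%s)" % ", ".join(header))
    # Stream the remaining rows straight into the table.
    conn.executemany("INSERT INTO report VALUES (?, ?)", reader)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM report").fetchone()[0])  # 2
```

Because `executemany` consumes the reader lazily, the pattern streams row by row and does not need the whole CSV in memory.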

DARPA Steps In: Python Could Become the Language of Big Data Analytics

我们两清 submitted on 2019-12-06 02:39:45
Today, Java is the undisputed must-learn language of big data, because Hadoop, the distributed management platform at the heart of most big data stacks, requires Java. That situation may be changed by DARPA. In "Big data becomes the U.S. military's 'new weapon'", CTOCIO (IT经理网) reported on the big data investments that the Defense Advanced Research Projects Agency (DARPA), an agency of the U.S. Department of Defense, is making through its XDATA program. Recently, DARPA used XDATA program funds to invest in Continuum Analytics, helping it develop Python's data-processing and visualization capabilities for big data. The goal of the XDATA program is to research algorithms for massive-scale data processing and data visualization over imperfect and incomplete data sets. The XDATA program's fund totals 100 million dollars; this DARPA investment in Continuum Analytics amounts to 3 million dollars. Python is a very popular programming language with broad adoption both among web programmers (quite a few of Google's products are written in Python, and Python is also Douban's main development language) and in scientific computing. Continuum Analytics aims to build the next generation of data analysis tools, making Python as powerful in data analysis as it already is in science, engineering, and computing at scale. Continuum Analytics' flagship product, Anaconda, is a data management, analysis, and visualization tool based on the Disco platform.
