Spacemesh’s Tomer Afek warns that the global acceptance of crypto is not progressing well

Spacemesh’s Tomer Afek warns that the global acceptance of crypto is not progressing well and the reputation of crypto in the Western world is declining. Spacemesh aims to make cryptocurrencies more accessible to young and underprivileged communities through its innovative consensus protocol.



Blue Prince (Official English Edition)

Overview: Blue Prince is an engaging adventure game that takes players on a mystical journey through a vibrant world filled with puzzles and hidden secrets. With its captivating storyline and stunning visuals, it offers a unique experience for casual and hardcore gamers alike.
Highlights: Immersive gameplay, a beautiful art style, and clever puzzles make this a standout title.
Tags: #Adventure #Puzzle #Fantasy #BluePrince #PCConsole
Updated: 2025-04-29 07:05:14
Link:


Claude 2 is here!

Hi there,

The wait is over! Our latest model, Claude 2, is now available through our API. Read more here.

We’ve heard from users that Claude 2 is easy to converse with, better at explaining its thinking, much less likely to produce harmful outputs, and has a longer memory. We’ve also made significant improvements on coding, math, and reasoning compared to our previous models.

Access the new model

As an API user, you can continue using Console as your workstation for optimizing prompts, managing your keys, and accessing developer resources. You’re able to call Claude 2 and benefit from its performance improvements today. As an AI enthusiast, anyone in the US and UK can now use the public-facing chat experience at claude.ai as their day-to-day AI assistant.

Join our Discord community

We’ve also just launched our official Anthropic Discord server, where you can chat about Claude 2, discover resources for building with our API, explore prompt ideas, provide feedback (including new feature requests), and showcase your project. Accept your invite here!

What builders are saying

AI content creation platform Jasper has already integrated Claude 2 to help its customers break through writer’s block and adapt content to different formats and languages. “We are really happy to be among the first to offer Claude 2 to our customers, bringing enhanced semantics, up-to-date knowledge training, improved reasoning for complex prompts, and the ability to effortlessly remix existing content with a 3X larger context window,” said Greg Larson, VP of engineering at Jasper. “We are proud to help our customers stay ahead of the curve through partnerships like this one with Anthropic.”

AI coding platform Sourcegraph has paired Claude 2 with its code graph to power its AI assistant, Cody. The assistant answers technical questions and generates code within its text editor.

“When it comes to AI coding, devs need fast and reliable access to context about their unique codebase and a powerful LLM with a large context window and strong general reasoning capabilities,” says Quinn Slack, CEO & Co-founder of Sourcegraph. “The slowest and most frustrating parts of the dev workflow are becoming faster and more enjoyable. Thanks to Claude 2, Cody’s helping more devs build more software that pushes the world forward.”

We can’t wait to see what you build with our latest model!

Warmly,
The Anthropic Team


IMF’s Gita Gopinath explains that there is a global consensus against regulatory measures for cryptocurrencies

IMF’s Gita Gopinath explains that there is a global consensus against regulatory measures for cryptocurrencies and that banning them is not under discussion at the G20 Summit.


When Hong Kong’s government tries to silence a song, the world should listen

If the court agrees, the companies should reply with “no.” Meta, Google, and their peers admittedly find themselves in a tortured position. In Hong Kong, this dispute is itself more evidence that the city is approaching a tipping point: the “one country, two systems” model affording Hong Kong its autonomy from mainland China is atrophying. Two of the remaining bulwarks against autocracy in the territory were judicial independence (which China has disavowed) and a free internet (in contrast to China’s Great Firewall); a victory for the government in this case would diminish both. This, in turn, would signal to companies that prefer to do business without betraying their values that the city is no longer a friendly hub for foreign investment. #愿荣光归香港 #禁制令 #GloryToHongKong #香港国歌


Minus is a social network where you can only ever post one hundred messages in total (replies are unlimited); its creator wants to see how social behavior changes under that constraint.

In the Hacker News comment thread, many people proposed other interesting media formats, such as an Instagram that forbids human faces, a microblog where posts only become public two weeks after they are written, and a Facebook populated entirely by bots. Someone also posted a poem:

In an effort to get people to look into each other’s eyes more, and also to appease the mutes, the government has decided to allot each person exactly one hundred and sixty-seven words, per day. When the phone rings, I put it to my ear without saying hello. In the restaurant I point at chicken noodle soup. I am adjusting well to the new way. Late at night, I call my long distance lover, proudly say I only used fifty-nine today. I saved the rest for you. When she doesn’t respond, I know she’s used up all her words, so I slowly whisper I love you thirty-two and a third times. After that, we just sit on the line and listen to each other breathe.

Jeffrey McDaniel, “The Quiet World”


(Note: whether a synapse’s activation precedes or follows the neuron’s action potential determines whether their connection is strengthened or weakened.)

(Note: whether a synapse’s activation precedes or follows the neuron’s action potential determines whether their connection is strengthened or weakened. If the synapse’s activation precedes the neuron’s action potential, the connection is strengthened; if it follows, the connection is weakened. “It’s a particular learning rule that uses spike timing to determine how to update the synapses. It’s kind of like: if the synapse fires into the neuron before the neuron fires, then it strengthens the synapse; and if the synapse fires into the neuron shortly after the neuron fired, then it weakens the synapse.”)

Another important point about neural networks is the loss function, which made practical training of deep networks possible. Interestingly, we do not see an obvious counterpart to the loss function in the real world. Does evolution iterate by minimizing a loss? Do economic or social systems have one? Apparently not.

2. The essence of neural networks.

Ilya believes that brains and large models alike essentially compress knowledge into a high-dimensional latent space. Whenever a new observation arrives, it updates some parameters of that latent space through the connections; the knowledge is stored in the connection weights. (“I guess what is recurring is that you have a neural network which maintains a high-dimensional hidden state, and when an observation arrives, it updates its high-dimensional hidden state through its connections in some way. You could say the knowledge is stored in the connections.”) This compression resembles human remembering and forgetting: you discard most useless information, keep only what is useful, and consolidate it into memory.

Compression is a “search for small circuits.” In mathematics there is the “minimum description length” principle: if you can find the smallest program that generates the required data, you can use it to make the best possible predictions. (“If you can find the shortest program that outputs the data at your disposal, then you will be able to use it to make the best prediction possible.”) This is mathematically provable. But minimum description length is a theoretical principle that is hard to realize exactly in practice, so in practice, for a given dataset, we can only use neural networks to find circuits that are “as small as possible.” The training process can therefore be understood as slowly transferring the entropy of the training data into the network’s parameters, and the circuits that finally settle out happen not to be too large. (“If you imagine the training process of a neural network as slowly transmitting entropy from the dataset to the parameters, then somehow the amount of information in the weights ends up being not very large, which would explain why the generalization is so good.”)

If you can compress information efficiently, you must already have acquired knowledge. GPT is already a world model; it knows all the intricacies. Although what it does looks as simple as predicting the next word, that is merely the optimization method.

Natural language is the best latent space, and the latent space in which alignment is easiest.

3. Two pivotal moments in Ilya’s research career.

The first was AlexNet in 2012. Alex Krizhevsky wrote convolution code fast enough on GPUs to make CNN training extremely fast, opening the era of computer vision. This was Ilya’s epiphany: the neural-network path could actually work.

The second, which gave Ilya his confidence in large models, came from an early finding by his team. They trained an LSTM to predict the next character in Amazon reviews; as the model grew from 500 to 4,000 units, a dedicated neuron emerged that represented whether a review’s sentiment was positive or negative. The team conjectured that once the model is large enough, it has run out of syntax to model, and the surplus capacity starts capturing semantic information. This showed that training by “predicting the next character” can teach a model far more hidden information.

4. On multimodality.

Multimodality is useful, especially vision. A third of the human cortex is devoted to processing vision; a neural network without vision would be rather limited. Humans learn more from images than from language. A person hears only about a billion words in a lifetime, which is a very limited amount of data; far more data arrives through vision. Often, learning from vision is easier than learning from text. Take color: text alone can teach the relations between colors, for example that red is closer to orange than to blue, but vision teaches this much faster.

5. Does AI have logic? Does it have consciousness?

Of course AI has logic; how else did AlphaGo and AlphaZero beat humans at Go, the game that most demands logical reasoning? How could AI truly demonstrate logical reasoning? By proving genuinely hard theorems, writing complex code, and solving open problems with novel methods. If an AI proved a previously unproven theorem, that would be hard to argue with.

How could we judge whether an AI is conscious? Run this experiment: suppose future AI could be trained from scratch on a much smaller dataset. We could then scrub the training data very carefully to ensure it contains nothing about consciousness; if the system needs human feedback during training, we would also be very careful never to mention the concept in any interaction. When training is finished, you chat with the AI and tell it about consciousness. Imagine that after your description it says, “Oh my god, I’ve been having this same feeling all along, but I didn’t know how to express it.” At that point you could conclude the AI is conscious.

6. Open source vs. closed source.

If models are not very capable, open-sourcing them is a great thing. If models become too capable, open-sourcing becomes dangerous. GPT-4 is not yet “excessively powerful,” but the trend is already visible, so keeping it closed is reasonable. (Like nuclear weapons?) Of course, at this stage the more important reason for staying closed is commercial competition (not safety, in Ilya’s own words).

7. Larger models will certainly give better results. (“Of course the larger neural nets will be better.”)

Scaling up was easy in past years because a lot of compute sat underutilized, and once it was redeployed, progress came quickly. Now scale has hit a kind of bottleneck and compute is expanding more slowly. “I expect deep learning to continue to make progress from other places. The deep learning stack is quite deep, and I expect that there will be improvements in many layers of the stack, and together they will still lead to progress being very robust.” I expect we will discover many as-yet-unknown properties of deep learning, and applying those properties will make models better. Models in 5 to 10 years will certainly be far stronger than today’s.

Links to the three interviews:
May 2020: Lex Fridman AI Podcast
March 2023: NVIDIA CEO Jensen Huang in conversation with OpenAI co-founder and chief scientist Ilya Sutskever on AI and ChatGPT
April 2023: internal talk at Stanford University by OpenAI co-founder and chief scientist Ilya Sutskever
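The spike-timing rule quoted in the opening note is spike-timing-dependent plasticity (STDP), and it is concrete enough to sketch in a few lines. This is a minimal illustration of the textbook exponential STDP window, not anything from the interviews; the constants `A_PLUS`, `A_MINUS`, and `TAU_MS` and the function name are illustrative choices.

```python
import math

# Illustrative constants (not from the interviews): maximum weight
# change for strengthening/weakening, and the decay time constant.
A_PLUS = 0.10
A_MINUS = 0.12
TAU_MS = 20.0

def stdp_delta_w(t_pre_ms: float, t_post_ms: float) -> float:
    """Weight update for one pre/post spike pair under an
    exponential STDP window.

    If the presynaptic spike precedes the postsynaptic spike
    (t_pre < t_post), the synapse is strengthened; if it follows
    (t_pre > t_post), the synapse is weakened. Either effect
    decays exponentially with the timing gap.
    """
    dt = t_post_ms - t_pre_ms
    if dt >= 0:  # synapse fired before the neuron -> strengthen
        return A_PLUS * math.exp(-dt / TAU_MS)
    else:        # synapse fired after the neuron -> weaken
        return -A_MINUS * math.exp(dt / TAU_MS)
```

For example, a presynaptic spike 5 ms before the postsynaptic one yields a positive update, one 5 ms after yields a negative update, and both effects shrink as the gap grows, which is exactly the before/after asymmetry the note describes.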
