This post was originally published March 4, 2019 on Medium.
Participants: Mehdi Zerouali (Sigma Prime), Diederik Loerakker / Protolambda (independent developer), Zak Cole (Whiteblock), Justin Drake (EF), Carl Beekhuizen (EF), Yannick Luhn (Brainbot), Greg Markou (Chainsafe), Lane Rettig (EWASM)
Facilitated by Lane Rettig and María Paula Fernández.
Note: the dialogue has been modified and condensed for easier readability — this is meant as a rough transcript. Please reach out to the people mentioned on gitter if you have more specific questions.
Justin- Zero hard forks are needed. The only prerequisite would be to set up the deposit contract on 1.0. However, a hard fork on 1.0 could add the finality functionality taken from 2.0. This would allow issuance to be drastically reduced (by a factor of 10 or 20, down to near the security budget of Ethereum Classic). With the chain finalised every 6 minutes, it could conceivably come to rely on transaction fees alone. Another benefit of finalisation is fungibility of the ETH token, with two-way transfers between the two chains.
Finalisation of blocks is an independent effort from 2.0. It is key that 1.0 clients are aware of the 2.0 chain — this is either as a full node of the beacon chain OR by being a light client of both. It will take time for this to occur, perhaps near Phase 1, sometime next year.
Greg- It might be too early. Could drain resources from both researchers and dapp developers to explore this now.
Zak- Agrees it is too early, the spec isn’t entirely complete yet. No defined function for peering / communication mechanisms. Network layer needs to be solid before looking at application layer.
Mehdi- The only thing available soon will be testnets. Lighthouse will have a testnet within the next few weeks. It’s too early to guide dapp developers on what their dapp might look like on 2.0: the EVM is deprecated in favour of EWASM, so developers should look into EWASM.
Justin- Deposits can be between 1–32 ether, these are locked in the deposit address (burner address). Within the beacon chain if you are not actively validating you can transfer between addresses (perhaps for arbitrage). This is purely a system chain, with no user transactions.
Justin- Initially the beacon chain will have very limited throughput, at 16 transactions per block, so it wouldn’t be a great mechanism to abstract fees. One research idea is a single unified Plasma chain to pay transaction fees on any shard. This removes the need for ether dust on every shard to pay for transactions. The problem of fee abstraction is more pressing in 2.0 than 1.0 (note: I think there may be a slight disconnect between the question and answer; typically fee abstraction refers to paying system transaction fees in something other than the base token, e.g. paying transaction fees in tokens in place of ether).
Carl- The beacon chain can be considered a state machine with a finite set of actions and state to update; it is not designed for arbitrary, general-purpose computation.
Carl- Yes. We now have a secure random beacon that can be used across the chain by dapps. Unbiasable and with the same properties that are used in consensus.
Justin- If you have a 1.0 contract with a long projected life, the 1.0 chain will most likely live on for decades. However, it’s important that it remains sustainable, issuance needs to not be that high. This can be accomplished through 2.0 finalisation, might be able to live on transaction fees alone. Other approach would be to embed 1.0 chain as a contract within 2.0 — this seems ambitious as an engineering question. Is it worth the time and effort?
Justin- At the very least you would be able to transfer Ether between both chains.
Carl- Merkle roots of data from the ETH 1.0 chain can be included on the 2.0 chain (proving accounts).
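The Merkle-root idea Carl mentions can be sketched in a few lines. This is purely illustrative: real Ethereum state proofs use a hexary Merkle-Patricia trie and RLP encoding, not the simple binary tree and made-up account strings below.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a simple binary Merkle tree (last node duplicated to pad odd layers)."""
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(leaves, index):
    """Collect (sibling hash, is-right-child) pairs from the leaf up to the root."""
    layer = [h(leaf) for leaf in leaves]
    proof = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        proof.append((layer[index ^ 1], index % 2))
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, is_right_child in proof:
        node = h(sibling + node) if is_right_child else h(node + sibling)
    return node == root

# Hypothetical 1.0 account data; only the 32-byte root needs to go on the 2.0 chain.
accounts = [b"alice:100", b"bob:250", b"carol:7", b"dave:42"]
root = merkle_root(accounts)
proof = merkle_proof(accounts, 1)
assert verify(root, b"bob:250", proof)       # account provably part of the committed state
assert not verify(root, b"bob:9999", proof)  # a forged balance fails verification
```

The point is that the 2.0 chain never stores the accounts themselves, only the root; anyone holding a proof can later demonstrate their 1.0 account against it.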
Justin- Issuance can be reduced because 1.0 is finalised by 2.0. At most it could be a 6 minute reorg from a bad miner. Issuance should be reduced because it is expensive for investors to get continuously diluted — ideally would be .5% for the entire Ethereum system. Secondly, it is environmentally expensive. With POS you get better security for a cheaper price.
Greg- We live in a multi-chain future. If dapps want to move to other chains they will, but their users might not. Layer 2 solutions will likely fill that deficiency before forcing dapps to move to other chains.
Carl- Goes back to the idea of using Merkle roots to include data; the same conditions apply if 2.0 is finalising 1.0. Long term it depends on what happens with 1.0 (1.x, EWASM): if there is a WASM interpreter running in a shard, then data will be more accessible.
Zak- Keep building / developing as you are, but anticipate that you may have to restart / redeploy on 2.0. Will present fewest vulnerabilities and security issues.
Diederik- If the 1.0 chain keeps running, you wouldn’t want to run on both at the same time. May want to stop support on 1.0, take state roots of the dapp and reinitialise on 2.0. Doesn’t need to be defined in the protocol.
Carl- It will be your choice, each will have different gas markets which will lead to economic load balancing.
Lane- Considered baking the load balancing into the protocol but it was incredibly complicated. Population density / cost of living analogy. Higher densities might have network effects but there are also costs associated with it. Yanking will also allow for asynchronous contract movement between shards.
Justin- Cross shard calls is a design space with tradeoffs, no single answer, like Plasma. People will try different things and standards will emerge. One thing to consider is that there will be basic infrastructure at the protocol layer in the form of crosslinks. This is a way for every shard to have light client access to other shards. Most likely there will also be basic asynchronous cross-shard calls. This works by having a special contract on each shard that burns ether sent to it. The burn generates a receipt which can then be consumed on the shard on the other end of the transaction as soon as the sending shard has been finalised through the beacon chain.
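The burn-and-receipt flow Justin describes can be modelled in miniature. This is a toy sketch, not the protocol: the class and field names are invented, and in reality the receipt would be proven against a finalised crosslink rather than passed as an object.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Receipt:
    source_shard: int
    target_shard: int
    recipient: str
    amount: int
    nonce: int

    def id(self) -> bytes:
        data = f"{self.source_shard}:{self.target_shard}:{self.recipient}:{self.amount}:{self.nonce}"
        return hashlib.sha256(data.encode()).digest()

@dataclass
class Shard:
    shard_id: int
    balances: dict = field(default_factory=dict)
    consumed: set = field(default_factory=set)  # replay protection for receipts
    nonce: int = 0

    def burn(self, sender, target_shard, recipient, amount):
        """The special burn contract destroys ether and emits a receipt."""
        assert self.balances.get(sender, 0) >= amount
        self.balances[sender] -= amount  # ether ceases to exist on this shard
        self.nonce += 1
        return Receipt(self.shard_id, target_shard, recipient, amount, self.nonce)

    def consume(self, receipt, source_finalised):
        """Mint on the target shard once the sending shard is finalised via the beacon chain."""
        assert source_finalised, "wait for beacon-chain finality first"
        assert receipt.target_shard == self.shard_id
        assert receipt.id() not in self.consumed, "receipt already spent"
        self.consumed.add(receipt.id())
        self.balances[receipt.recipient] = self.balances.get(receipt.recipient, 0) + receipt.amount

shard_a = Shard(0, balances={"alice": 10})
shard_b = Shard(1)
r = shard_a.burn("alice", target_shard=1, recipient="bob", amount=3)
shard_b.consume(r, source_finalised=True)
assert shard_a.balances["alice"] == 7 and shard_b.balances["bob"] == 3
```

Tracking consumed receipt IDs is what prevents the same burn from being redeemed twice; the finality check is what makes the transfer asynchronous, with latency of roughly one epoch.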
Justin- Latency from basic infrastructure would be one epoch, 6 minutes. To get anything faster you can also experiment with optimistic approaches. Assume that the checkpoint will be finalised but don’t wait for it, and then layer your next actions on top of that. If for some exceptional reason this does not occur there would be a revert mechanism built in. The design space opens up here — you can trade off certainty of execution vs. latency.
Lane- Reddit AMAs, had a 2.0 AMA recently.
Justin- One of the biggest concerns for 2.0 will be storage fees. Good news is that there is movement in the form of 1.x which will help give feedback. This also applies to the Plasma research, state channels, etc. Getting close to world computer will require these Layer 2 solutions.
Lane- None of the 2.0 work requires a 1.0 hard fork, however, there will be experimental efforts like storage fees that will likely take place via a 1.0 HF. EWASM is also important on this front.
Justin- We need VDFs to get a very high quality of randomness. RANDAO gets a pretty good version, but the two together get basically perfect randomness. ELI5: there are ~100 people in a dark room, with a die in the middle. They are asked to roll the die, but some do not participate honestly because they are asleep or malicious. No one can see the die. If even one honest person rolls the die, then no one can know what the final value will be ahead of time. VDFs provide a mechanism to keep the lights off for a preset period of time and not a moment before: an artificial delay on seeing the answer, which is unique and only becomes visible at a future time. Malicious actors are constrained by physics.
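The "lights off" delay can be illustrated with iterated hashing, the simplest inherently sequential computation. This is only a sketch of the delay property: a real VDF (e.g. based on repeated squaring in a group) also provides a cheap verification step, which chained hashing lacks.

```python
import hashlib

def delay(seed: bytes, iterations: int) -> bytes:
    """Each hash depends on the previous output, so the work cannot be
    parallelised: adding hardware does not let anyone finish sooner."""
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

# Everyone's contributions are combined first (the die roll in the dark)...
contributions = [b"honest-1", b"sleepy-2", b"malicious-3"]
seed = hashlib.sha256(b"".join(contributions)).digest()

# ...and the final random value only emerges after the forced sequential delay,
# too late for the last contributor to grind a favourable outcome.
randomness = delay(seed, 200_000)
```

Because the output is fixed by the seed yet unknowable until the delay elapses, a malicious last participant cannot peek at the result before committing their contribution.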
Justin- Originally the idea was to harden the consensus layer, because RANDAO is vulnerable to two attacks. In RANDAO, the last entity invited to reveal in an epoch can choose not to reveal, which biases the randomness by one bit; this is the “last revealer attack”. If an attacker somehow manages to control the last 3 slots, they control 3 bits of attack surface (2³ = 8 possible outcomes). And if randomness can be biased in one epoch, malicious actors can use that bias to gain even more influence in the next epoch, pushing their slots closer to the reveal period. This is the “amplification attack”: if you control 35% of the stake, you can put yourself in the last-revealer position 50% of the time.
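A tiny simulation makes the last-revealer attack concrete. The mixing and payoff functions below are invented stand-ins (real RANDAO mixes BLS reveals, and "attacker wins" is whatever downstream outcome the attacker cares about); the structure of the attack is what matters.

```python
import hashlib
import random
from itertools import product

def mix(reveals):
    """Toy RANDAO: XOR the hashes of all published reveals together."""
    acc = b"\x00" * 32
    for r in reveals:
        acc = bytes(a ^ b for a, b in zip(acc, hashlib.sha256(r).digest()))
    return acc

def attacker_wins(randomness):
    # Hypothetical payoff: the attacker profits when the first byte is low.
    return randomness[0] < 128

rng = random.Random(0)
honest = [rng.randbytes(32) for _ in range(5)]
attacker_slots = [rng.randbytes(32) for _ in range(3)]  # attacker holds the last 3 reveals

# Withholding each of the 3 reveals is a free binary choice, giving 2^3 = 8
# candidate outcomes; the attacker publishes whichever subset they prefer.
outcomes = []
for choice in product([False, True], repeat=3):
    published = honest + [r for r, reveal in zip(attacker_slots, choice) if reveal]
    outcomes.append(attacker_wins(mix(published)))

print(f"attacker-favourable outcomes among 8 candidates: {sum(outcomes)}")
```

With 8 candidate outcomes instead of 1, an event with fair probability 1/2 is almost always reachable, which is exactly the bias the VDF delay is designed to remove.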
To address these biases at the protocol layer, we have security margins. One way is to make stronger assumptions, i.e., that people are honest. With better randomness, the assumption that 70% of the network is honest could be relaxed to 66%.
A second, more tangible value for strong randomness (VDF) is at the application layer. Would be incredibly important for something like a billion $ lottery, where one biased bit of randomness increases the odds of payout for large players.
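Back-of-the-envelope arithmetic shows why even one biased bit matters at the application layer (the lottery size below is hypothetical):

```python
# A lottery draws a winner uniformly from 1024 tickets using 10 random bits.
tickets = 1024
fair_odds = 1 / tickets

# An attacker who can bias one bit effectively gets two independent draws and
# keeps the one they prefer: they win if EITHER candidate outcome pays out.
biased_odds = 1 - (1 - fair_odds) ** 2

assert biased_odds / fair_odds > 1.99  # one bit of bias roughly doubles the odds
```

For a billion-dollar pot, doubling the win probability of the largest players is an enormous subsidy, which is why applications want the unbiasable VDF-hardened beacon rather than raw RANDAO output.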
Mehdi- Biweekly implementer calls help to coordinate the teams on development. At the protocol layer, there are also test vectors set by the Ethereum Foundation which ensure each implementation is on par with the specification. Will hopefully see multi-client testnets in a few weeks’ time.
Mehdi- Funding. This problem is not unique to the Ethereum space, affects all of open source. Need to find sustainable business models. Another challenge is balancing between their current work and staying up to date with the spec.
Mehdi- A naive implementation of the spec would not perform well; it would require a lot of optimisation. This is a requirement on top of implementing the spec itself.
Diederik- There is a tradeoff between optimising prematurely or researching for a better solution.
Carl- From a spec-writing perspective we have optimised for readability; it should be easy to understand. The research team has been moving away from plain-text pseudocode towards executable Python. They want clients to come up with different optimisations to solve problems in different ways, trying to avoid all clients failing in the same way and possibly missing better optimisations. If everyone is focused on different methods of achieving the naive spec, this is probably better for the health of the ecosystem in the long run.
Greg- We need client competition to a certain degree. Client specific optimisations on top of a barebones spec leads to different tradeoffs.
Mehdi- Simplicity was certainly a design goal for Ethereum Serenity. The spec does not necessarily need to have all client optimisations built into it. He views the client as a public good.
Lane- It’s part of the Ethereum ethos to have multiple implementations (as compared to Zcash or Bitcoin). Important to point out that there have been consensus failures, it’s a useful method to catch bugs.
Lane- There might be diminishing returns.
Diederik- In terms of ETH 2.0 it’s important that large groups of validators do not fail at the same time if their client has the same bug. Diversity is healthy for the validator ecosystem.
Zak- Language differences play a part. Different clients can be more modular depending on the use case. There is a lack of standards / specifications for what clients are required to do, which will lead to healthy competition.
Greg- Diminishing returns are real, probably already. First, the spec is not quite finished and bugs are still being found. Case in point: transfers on the Beacon Chain could activate validators, who then get slashed because they aren’t aware of it. Second, readability is huge; the spec is complex, and not everyone knows Rust. Third, contributions to upstream libraries: all the teams need libp2p, so each language ecosystem is completing what it needs, leading to a more unified feature set. (Looks at camera: Dean, we don’t need a client in Swift.)
Lane- Readability is important. The yellow paper is challenging to understand; he finds the Trinity source code easier. Reading a client implementation might be easier for developers than digesting the math. There are also other types of experimentation going on, e.g. business models.
Greg- Happens in gitter or side communications between implementers. The researchers are doing an incredible job letting people know what’s up, Danny especially. We’re still in the research phase, though very close to finalising. Additional formal coordination isn’t necessary quite yet, perhaps when the cross-client testnets are closer it might be more important.
Mehdi- Agree, Danny is doing an amazing job. It’s also difficult in a decentralised environment because no one formally told him to do it. Wouldn’t work in a commercial context, a researcher acting as a project manager would be unheard of.
Greg- Implementers call standup gives a good enough of an understanding between client teams. With a cross-client testnet it will come down to implementers chatting with each other.
Zak- A working multi-client testnet has been challenging. Need conformance tests, performance metrics, and functional Docker files. Might be good to have the Cat Herders coordinate a multi-client testnet; it should not be the responsibility of the client developers. Please try to move away from writing everything to memory.
Diederik- Need to shift from single to multi-client testnets. This is difficult because there is still undefined pseudo-code in the spec. There are plans to formalise these parts of the spec. They want big picture test vectors to make sure clients generally confirm state transitions and also agree on networking.
Justin- The roadmap is bigger than the spec, which only covers Phase 0. The roadmap has evolved significantly over the years; until recently it was primarily driven by the research team (Vitalik and Vlad mostly). It is now changing in relatively smaller ways. One example is the addition of transfers (will the BETH token be fungible, and will there be tax implications?); it was a low-hanging change. Phases 0, 1 and 2 are pretty well defined; what comes after is blurrier. They would like to have a quantum-secure chain after that, which is part of the reason they gave the grant to Starkware. STARKs are probably powerful enough to handle all of those problems, including signature aggregation (replacing BLS signatures), and can be useful for VDFs. Another non-quantum-secure part is randomness: whereas right now they use BLS signatures, there is a new design requirement for ETH 2.0 that it should be friendly to MPCs (n-of-m staking pools).
Diederik- In the short term, from Phase 0 to 1 there is room for concurrent research on how shards can be upgraded individually.
Mehdi- We had to throw away a lot of Rust code but we learned a lot, it’s fine. One thing they would have done differently is to maybe wait until a release candidate was ready. Much better now with releases and a change log.
Greg- Disagrees with the Kyokan report claim that there isn’t input from implementers to researchers, he can still ask questions.
Zak- Thinks there should be more formal verification, with a feedback loop between formal verification and the eventual spec changes. Formal verification is like establishing a blueprint for a building: it should occur before construction begins.
Zak- There is not much interaction or interest in 2.0. They are focused on enterprise implementations, 2.0 seems too far into the future. Too early to say what will happen in the future.
Greg- For dapp devs they can still contribute to EWASM issues, join some gitters, github issues. Just because you don’t write code doesn’t mean you can’t contribute.
Lane- All avenues are developer friendly, perhaps not for non-developers. Cat herders are a great way to get involved if you are non-technical. Build a design ring if you are a designer.
Hopefully people find this helpful. Thanks to Fluence for livestreaming and Lane / MP for facilitating. If there are any errors or inconsistencies, comment or message me. I can also be found here, looking for work.