
DecentralML - grant application #1818

Merged
merged 19 commits into w3f:master on Oct 3, 2023

Conversation

AshleyTuring
Contributor

@AshleyTuring AshleyTuring commented Jun 22, 2023

Project Abstract

One-liner: A Polkadot protocol for decentralised federated machine learning and collective governance.

Project Description: DecentralML introduces decentralised federated machine learning (DFML), governed by collective consensus, to the Polkadot ecosystem. Our goal is to provide a robust framework for AI model developers, organisations, and applications, enabling decentralised ownership of models while ensuring privacy and scalability. With node or on-device training protecting privacy, the ability to share training between organisations (nodes), collaborative AI training, and "collective" governance controls, DecentralML may transform the field of machine learning for state-of-the-art AI models (think LLMs and more) with transparent governance.

This is a new funding application and is not in response to an RFP, nor a follow-up grant.

Grant level

  • Level 1: Up to $10,000, 2 approvals
  • Level 2: Up to $30,000, 3 approvals
  • Level 3: Unlimited, 5 approvals (for >$100k: Web3 Foundation Council approval)

Application Checklist

  • [yes] The application template has been copied and aptly renamed (project_name.md).
  • [yes] I have read the application guidelines.
  • [yes] Payment details have been provided (bank details via email or BTC, Ethereum (USDC/DAI) or Polkadot/Kusama (USDT) address in the application).
  • [yes] The software delivered for this grant will be released under an open-source license specified in the application.
  • [yes] The initial PR contains only one commit (squash and force-push if needed).
  • [yes] The grant will only be announced once the first milestone has been accepted (see the announcement guidelines).
  • I prefer the discussion of this application to take place in a private Element/Matrix channel. My username is: @_______:matrix.org (change the homeserver if you use a different one)

@CLAassistant

CLAassistant commented Jun 22, 2023

CLA assistant check
All committers have signed the CLA.

@AshleyTuring AshleyTuring changed the title initial commit DecentralML - decentralised federated machine learning Jun 22, 2023
@AshleyTuring AshleyTuring changed the title DecentralML - decentralised federated machine learning DecentralML - grant application Jun 22, 2023
Collaborator

@Noc2 Noc2 left a comment

Thanks a lot for the application. The Federated Learning Consensus especially sounds interesting. Do you know about the initiative by Alex Skidanov, co-founder of NEAR? See for example https://www.youtube.com/watch?v=2KM9N3jFdSk Regarding the deliverables, could you provide as many details as possible (functionality, programming language, etc.) and exclude deliverables that you are not sure how to implement, like "Data Management"? The reason is that we need as many details as possible: the milestone tables are the requirements of our contracts.

@Noc2 Noc2 added the changes requested The team needs to clarify a few things first. label Jun 26, 2023
@AshleyTuring
Contributor Author

Thanks a lot for the application. The Federated Learning Consensus especially sounds interesting. Do you know about the initiative by Alex Skidanov, co-founder of NEAR? See for example https://www.youtube.com/watch?v=2KM9N3jFdSk Regarding the deliverables, could you provide as many details as possible (functionality, programming language, etc.) and exclude deliverables that you are not sure how to implement, like "Data Management"? The reason is that we need as many details as possible: the milestone tables are the requirements of our contracts.

@Noc2 Thank you, we will check out the NEAR initiative and flesh out the deliverables a little more. Please can you tell us what level of detail we need and the deadlines that might be involved to get into the "Q2 batch"?

@Noc2
Collaborator

Noc2 commented Jun 26, 2023

@Noc2 Thank you, we will check out the NEAR initiative and flesh out the deliverables a little more. Please can you tell us what level of detail we need and the deadlines that might be involved to get into the "Q2 batch"?

We don't have any deadlines and accept grants on a continuous basis. Regarding the details, the more, the better ;-)

@AshleyTuring
Contributor Author

We don't have any deadlines and accept grants on a continuous basis. Regarding the details, the more, the better ;-)

Hey @Noc2,
How are you? Hope all is well. We've updated our application based on thorough research we've conducted over the past couple of weeks. We've decided to put aside the proxyModel approach for now due to the widespread use and commercial support of TensorFlow's Federated Machine Learning. We're adopting TensorFlow's approach, which allows any data scientist to transform a regular model into a federated one. Then, by using DecentralML, they can take advantage of on-chain incentives and transparency. We're excited to hear your thoughts. Thanks a lot!
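For readers unfamiliar with TensorFlow Federated, below is a minimal sketch of what "transforming a regular model into a federated one" can look like. This is illustrative only, not DecentralML code; the `tff.learning` module paths follow recent TensorFlow Federated releases and may differ between versions.

```python
# Illustrative only: wrapping a plain Keras model into a federated averaging
# process with TensorFlow Federated (TFF). Module paths follow recent TFF
# releases (tff.learning.models / tff.learning.algorithms) and may vary.
import tensorflow as tf
import tensorflow_federated as tff

def build_keras_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

def model_fn():
    # TFF rebuilds the model inside this function; input_spec must match the
    # structure of each client's tf.data.Dataset elements.
    return tff.learning.models.from_keras_model(
        build_keras_model(),
        input_spec=(
            tf.TensorSpec(shape=[None, 784], dtype=tf.float32),
            tf.TensorSpec(shape=[None], dtype=tf.int32),
        ),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    )

# Clients train locally; the server aggregates the weight updates.
fed_avg = tff.learning.algorithms.build_weighted_fed_avg(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
)
state = fed_avg.initialize()
# result = fed_avg.next(state, federated_train_data)  # one federated round
```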

@AshleyTuring
Contributor Author

Excellent work, thank you @takahser ! The fixes for formatting have been committed.

We are keen to get started and plan our next months, if you, @Noc2 or the team need anything please let us know.

Many thanks

@takahser takahser self-requested a review July 18, 2023 13:14
Collaborator

@takahser takahser left a comment

Besides the inline comments I've got a couple of questions here:

  • What are the trust assumptions between the participants that train a common model? For example, in the federated learning paper for private medical data you shared, the assumption is that the participating clinics trust each other.
  • What are the incentives for Data Attributers to act honestly and not mislabel the data with some arbitrary tag?
  • You're mentioning a token staking economy - do you mean tokenomics by that?
  • In general, what are the incentives for using federated learning?
  • In general, what kind of ML-based applications do you envision your platform being used for / what kind of models would be trained?

applications/decentral_ml.md — 10 inline review comments (outdated, resolved)
@AshleyTuring
Contributor Author

@takahser thank you for the great questions.

  1. What are the trust assumptions between the participants that train a common model? For example, in the federated learning paper for private medical data you shared (https://www.nature.com/articles/s41598-022-12833-x), the assumption is that the participating clinics trust each other.

Answer: In our approach, we're using the TensorFlow Federated Learning single-entity implementation, which assumes that the model creator is the only trusted party. Unlike the federated-entity approach, our method doesn't require mutual trust between all participants. While we might consider implementing a decentralized multi-party approach in the future, it would necessitate substantial modifications to the TensorFlow FL library.

  2. What are the incentives for Data Attributers to act honestly and not mislabel the data with some arbitrary tag?

Answer: Ensuring honesty among Data Attributers can be a complex task. Initially, we'll apply a basic game-theoretic consensus approach to this challenge. We plan to use the behavioural Strategy software pattern (https://en.wikipedia.org/wiki/Strategy_pattern), which allows us to switch implementations based on parameters passed by the Model Creator when a model is initially uploaded. This will allow us to adapt and refine the strategy according to the model objectives.
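To make the Strategy-pattern idea above concrete, here is a hypothetical Python sketch (all class and parameter names are illustrative, not DecentralML's): the consensus strategy is picked from a parameter supplied by the Model Creator at model-upload time, so implementations can be swapped without touching the rest of the pipeline.

```python
# Hypothetical sketch of the Strategy pattern described above; names are
# illustrative and not part of DecentralML.
from abc import ABC, abstractmethod

class AnnotationConsensusStrategy(ABC):
    @abstractmethod
    def score(self, annotations: dict[str, str]) -> dict[str, float]:
        """Map each annotator to a quality score for one data item."""

class MajorityAgreementStrategy(AnnotationConsensusStrategy):
    def score(self, annotations):
        counts = {}
        for label in annotations.values():
            counts[label] = counts.get(label, 0) + 1
        majority = max(counts, key=counts.get)
        # annotators agreeing with the majority label score 1, others 0
        return {who: 1.0 if label == majority else 0.0
                for who, label in annotations.items()}

STRATEGIES = {"majority": MajorityAgreementStrategy}

def strategy_for(model_params: dict) -> AnnotationConsensusStrategy:
    # "consensus_strategy" is a hypothetical parameter set by the Model Creator
    return STRATEGIES[model_params.get("consensus_strategy", "majority")]()
```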

  3. You're mentioning a token staking economy - do you mean tokenomics by that?

Answer: Yes, by token staking economy we're referring to DecentralML's tokenomics. When a model is uploaded, a certain amount of tokens is staked and currently divided into three pools for governance: Data Contributors, Model Engineers, and Data Annotators. This staked amount, paid by the Model Creator, is stored on-chain and used to incentivize these three groups. We also set the percentage of the pool allocated to each group, as well as the charges for downloading the model.
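As an illustration of the split described above (all field names and percentages are hypothetical, not taken from the proposal), the on-chain logic would conceptually do something like:

```python
# Hypothetical sketch of splitting the Model Creator's stake into the three
# reward pools; names and percentages are illustrative only.
def split_stake(staked_amount: int, pool_shares: dict[str, float]) -> dict[str, int]:
    if abs(sum(pool_shares.values()) - 1.0) > 1e-9:
        raise ValueError("pool shares must sum to 100%")
    return {pool: int(staked_amount * share) for pool, share in pool_shares.items()}

pools = split_stake(
    staked_amount=1_000_000,          # paid by the Model Creator, held on-chain
    pool_shares={                     # percentages set at model-upload time
        "data_contributors": 0.40,
        "model_engineers": 0.35,
        "data_annotators": 0.25,
    },
)
```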

  4. In general, what are the incentives for using federated learning?

Answer: Users can benefit from earning a portion of the staked tokens and maintain control over their models. Additionally, use of the model itself can serve as an incentive, for example for the model creator.

  5. In general, what kind of ML-based applications do you envision your platform being used for / what kind of models would be trained?

Answer: Great question. DecentralML, leveraging TensorFlow Federated Learning, can support a wide range of applications. Typically these models are best built from crowd-sourced data, such as Gboard, Google's keyboard, which predicts the next word based on gradients sent by millions of devices. The versatility and popularity of TensorFlow make it a good choice for DecentralML, as a large number of machine learning engineers use it. We anticipate this will help drive adoption and transform how we transparently train and govern models.

@takahser
Collaborator

takahser commented Aug 2, 2023

Answer: In our approach, we're using the TensorFlow Federated Learning single-entity implementation, which assumes that the model creator is the only trusted party. Unlike the federated-entity approach, our method doesn't require mutual trust between all participants. While we might consider implementing a decentralized multi-party approach in the future, it would necessitate substantial modifications to the TensorFlow FL library.

A) If you're not using a "decentralized multi-party approach" I'm not sure what the benefits are of doing this on-chain. Why not just use one of the centralised systems instead?

Answer: Ensuring honesty among Data Attributers can be a complex task. Initially, we'll apply a basic game-theoretic consensus approach to this challenge. We plan to use the behavioural Strategy software pattern (https://en.wikipedia.org/wiki/Strategy_pattern), which allows us to switch implementations based on parameters passed by the Model Creator when a model is initially uploaded. This will allow us to adapt and refine the strategy according to the model objectives.

B1) So in other words, the "game theory consensus" will be configurable by using the behavioural strategy?
B2) What kind of game theory would you use though? Could you elaborate on what that would look like?

Answer: Users can benefit from earning a portion of the staked tokens and maintain control over their models. Additionally, use of the model itself can serve as an incentive, for example for the model creator.

C1) By Users do you refer to Data Contributors, Model Engineers, and Data Annotators? Or do you mean users that consume the model?
C2) Also, is there any mechanism to increase the amount of tokens in the 3 pools or is it a finite amount that the model creator pays and once the pools are drained there'd be no more incentive for the 3 groups to work on it?
C3) I'm confused that you're mentioning data contributors and how they're incentivised here, while earlier you stated that there's going to be only 1 model creator. Or do you mean by that that there is 1 model creator that doesn't bring any data, but there can be multiple data contributors? If the latter is the case, isn't it the same mechanism as described in the linked paper? Hence, it raises the question of how the data feed providers and their data can be trusted.

Answer: Great question. DecentralML, leveraging TensorFlow Federated Learning, can support a wide range of applications. Typically these models are best built from crowd-sourced data, such as Gboard, Google's keyboard, which predicts the next word based on gradients sent by millions of devices. The versatility and popularity of TensorFlow make it a good choice for DecentralML, as a large number of machine learning engineers use it. We anticipate this will help drive adoption and transform how we transparently train and govern models.

You're mentioning G-board, an AI-enhanced keyboard here. Can you estimate how many devices you'd need for the model to become useful here? If it's in the millions (as you mention in the example), do you have a plan on how to onboard them and convince them to use your platform?

@AshleyTuring
Contributor Author

AshleyTuring commented Aug 7, 2023

@takahser our replies to your queries are as follows:

A) If you're not using a "decentralized multi-party approach" I'm not sure what the benefits are of doing this on-chain. Why not just use one of the centralised systems instead?

Firstly, federated machine learning is multi-party: the model will be trained with a multi-party approach, with each user training their own local model and the master model aggregating the weights of all the local models (see the sketch at the end of this answer). This doesn't require the master model to trust each participant, as the gradients can be verified with cross-validation. Furthermore, in terms of querying the merits of DecentralML in comparison to unspecified centralised systems, we'd just like to point out the obvious: decentralisation provides levels of transparency, inclusivity, and accessibility that centralised systems inherently lack. Within the proposed 2-month timeline, we're not just building a project, but a cornerstone for TensorFlow developers, who are a huge segment of the machine learning community. This allows them to augment their existing work with the power of tokenisation and governance through Polkadot's decentralised technology. Recognising the value this brings to the Polkadot ecosystem, we're somewhat taken aback by your question. But to clarify any confusion, here are the high-level immediate benefits delivered by this grant proposal, in addition to adding a Substrate support option for all TensorFlow developers:

a. Data Annotators: As a framework, it opens the door to a diverse array of dApps, an opportunity that a centralised solution obviously cannot provide, and it also brings transparency into training. This not only expands the Polkadot ecosystem but also paves the way for varied data annotation methods, which is hugely valuable for ML in general.
b. Model Engineers: Decentralisation fosters an inclusive platform for a broad range of participants. It ushers in a new era of on-chain governance, enabling democratic decision-making processes far beyond the scope of centralised solutions.
c. Data Contributors: For the first time, contributors benefit from immutable on-chain accountability and transparency, and can be rewarded.
d. Clients: Decentralisation encourages competition, innovation, and diversity of models available for commercial, contribution, or educational purposes. This leads to superior models and a thriving marketplace.

We see DecentralML as an important step towards an inclusive, transparent, and robust AI ecosystem. We're laying the groundwork promoting transparency, collaboration and shared learning among a wider network of participants.
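As referenced above, here is a minimal, framework-agnostic sketch of the aggregation step (plain Keras/NumPy, not DecentralML or TFF code): each participant trains a local copy starting from the global weights, and the master model averages the resulting weights, FedAvg-style.

```python
# Minimal illustration of the local-train / weight-averaging loop described
# above; plain TensorFlow/Keras, not DecentralML code.
import numpy as np
import tensorflow as tf

def make_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
    return model

def local_update(global_weights, x, y):
    """One participant: start from the global weights, train on local data."""
    model = make_model()
    model.set_weights(global_weights)
    model.fit(x, y, epochs=1, verbose=0)
    return model.get_weights()

def aggregate(weight_sets):
    """Master model: average each layer's weights across participants."""
    return [np.mean(layers, axis=0) for layers in zip(*weight_sets)]

# one round: global_w = aggregate([local_update(global_w, x_i, y_i) for (x_i, y_i) in clients])
```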

B1) So in other words, the "game theory consensus" will be configurable by using the behavioural strategy?

Yes that's the plan.

B2) What kind of game theory would you use though? Could you elaborate on what that would look like?

The premise is to incentivise high-quality contributions in a trustless environment. Here's a high-level outline of how we plan to implement it: assign a base reward to each Data Annotator for their participation; this encourages all Annotators to contribute. Next, we introduce a validation mechanism. In each round, a subset of annotations made by each Annotator is selected ("gamed") by the system and presented to other Annotators for validation. This obviously needs to account for the possibility of collusion, so the identity of the original Annotator is hidden during this process. Validators are then assigned different trust levels based on the level of consensus reached on their annotations (and perhaps a separate, extensible ranking algorithm). Essentially, the quality of each Annotator's work is assessed trustlessly, based on the degree of agreement among validators. Annotators whose work garners a higher level of agreement are assumed to provide more accurate and high-quality labels, and they receive a larger share of the reward pool. Conversely, those with lower agreement levels receive a smaller share. This happens iteratively, creating a dynamic and self-regulating system. This approach leverages the principles of game theory to foster and reward quality and accuracy.
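A toy numerical sketch of the agreement-weighted payout described above (all names and figures are made up): each Annotator receives the base reward, and the remainder of the pool is shared in proportion to validator agreement.

```python
# Illustrative only: base reward plus agreement-weighted share of the pool.
def distribute_rewards(pool: float,
                       agreement: dict[str, float],  # annotator -> agreement in [0, 1]
                       base_reward: float) -> dict[str, float]:
    payouts = {annotator: base_reward for annotator in agreement}
    remaining = pool - base_reward * len(agreement)
    total_agreement = sum(agreement.values()) or 1.0
    for annotator, score in agreement.items():
        payouts[annotator] += remaining * (score / total_agreement)
    return payouts

print(distribute_rewards(
    pool=100.0,
    agreement={"alice": 0.9, "bob": 0.5, "carol": 0.1},
    base_reward=5.0,
))
# -> {'alice': 56.0, 'bob': ~33.33, 'carol': ~10.67}
```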

C1) By Users do you refer to Data Contributors, Model Engineers, and Data Annotators? Or do you mean users that consume the model?

Data Annotators, Model Engineers, Data Contributors AND Clients

C2) Also, is there any mechanism to increase the amount of tokens in the 3 pools or is it a finite amount that the model creator pays and once the pools are drained there'd be no more incentive for the 3 groups to work on it?

Yes, and it also takes a percentage of whatever is remaining (given there are several decimal places, this can go on for quite a while).

C3) I'm confused that you're mentioning data contributors and how they're incentivised here, while earlier you stated that there's going to be only 1 model creator. Or do you mean by that that there is 1 model creator that doesn't bring any data, but there can be multiple data contributors? If the latter is the case, isn't it the same mechanism as described in the linked paper? Hence, it raises the question of how the data feed providers and their data can be trusted.

To clarify: Data Annotators, Data Contributors, and Model Engineers earn a portion of the staked tokens, with Model Engineers having collective governance and control over the model. Clients benefit from the actual use of the model itself.

You're mentioning G-board, an AI-enhanced keyboard here. Can you estimate how many devices you'd need for the model to become useful here? If it's in the millions (as you mention in the example), do you have a plan on how to onboard them and convince them to use your platform?

The number of Data Contributors varies depending on the specific problem being solved and the parts of the DecentralML framework the model creator chooses to use. The goal with DecentralML is to offer a versatile solution that caters to various businesses, brands, and individuals seeking to leverage TensorFlow FL with decentralized governance and incentivization. If there are some gaps in your understanding of TensorFlow/ML/federated machine learning, don't worry: there are numerous possibilities for utilizing DecentralML/TensorFlow FL, and this link might be of some further assistance.

@takahser takahser self-requested a review August 9, 2023 13:35
@semuelle
Member

semuelle commented Sep 8, 2023

Hi @livetreetech. Could you let us know your availability early next week, so I can schedule a call with members of the committee? Feel free to email me, including the email addresses of anyone from your team you'd like to invite.

@github-actions github-actions bot added the stale label Sep 23, 2023
@semuelle semuelle removed the stale label Sep 24, 2023
Contributor

@dsm-w3f dsm-w3f left a comment

@livetreetech thank you for the grant application. I did some research on the topic and tried to digest the long discussion in this PR. My concerns about the application are:

  1. There is no comparison with other solutions and approaches to the same problem. I was able to find this solution for the same theme (https://feltlabs.ai/). Could you compare your proposal with what they are doing? Furthermore, there is literature under the terms blockchain-based federated learning (BCFL) and blockchain-empowered federated learning. In particular, it looks like there are many approaches to dealing with incentive models in BCFL; these papers (1 - see Section 3.d, and 2 - Section 6) mention some of them, but there are more in the literature. How does your proposal fit into the current literature on incentive models for BCFL?

  2. Flexibility. As mentioned, there are many incentive models for BCFL. Would it be possible for your proposal to be flexible and accommodate different incentive models, like a framework? In this way, it would fit better as a common good, and the proof that a given incentive model works would be the responsibility of each model instance. Using an already proven model to show that the framework works would also be nice, to avoid skepticism about what you are proposing.

  3. End-to-end (e2e) testing. How do you plan to show that the complete system works? What will be the test case that shows the federated learning and incentive model working? I think the testing part mentions unit tests and somewhat manual checks in the documentation, but when we evaluate a system we need to see all parts working together. Please add an e2e testing guide as a deliverable of your proposal and explain what the case will be to show the system working.

semuelle previously approved these changes Sep 27, 2023
Member

@semuelle semuelle left a comment

I'm interested in the results of this, so I'm giving my approval.

@nikw3f
Contributor

nikw3f commented Sep 27, 2023

Would this be able to support custom hardware for machine learning like NVIDIA DGX systems or tensor cores?

@AshleyTuring
Contributor Author

AshleyTuring commented Sep 30, 2023

Thanks @dsm-w3f. May I suggest we have another call, to avoid another lengthy exchange, if the following doesn't address your concerns?

  1. There is no comparison with other solutions and approaches to the same problem. I was able to find this solution for the same theme (https://feltlabs.ai/). Could you compare your proposal with what they are doing?

FELT Labs seems to be strongly coupled with the Ocean Protocol, based on its documentation. It uses Ocean's data reward system for incentives and appears specifically designed to fit within the Ocean Protocol, utilising algorithms like FedAvg within the confines of Ocean (which does not appear scalable to me at all). So it's not a widely used framework like TensorFlow, but more of a loosely supported FL use case forced on top of Ocean's reward protocol. On the other hand, DecentralML is a much more powerful framework: it expands on TensorFlow FL, which is specifically designed for this type of ML, and through this grant incorporates tokenisation and governance features, all within the Polkadot ecosystem.

  2. Flexibility. Furthermore, there is literature under the terms blockchain-based federated learning (BCFL) and blockchain-empowered federated learning. In particular, it looks like there are many approaches to dealing with incentive models in BCFL; these papers (1 - see Section 3.d, and 2 - Section 6) mention some of them, but there are more in the literature. How does your proposal fit into the current literature on incentive models for BCFL? As mentioned, there are many incentive models for BCFL. Would it be possible for your proposal to be flexible and accommodate different incentive models, like a framework? In this way, it would fit better as a common good, and the proof that a given incentive model works would be the responsibility of each model instance. Using an already proven model to show that the framework works would also be nice, to avoid skepticism about what you are proposing.

DecentralML is designed for flexibility, utilising the Strategy pattern we've previously discussed at length. This allows it to adapt to various approaches and reward models in Blockchain-based federated learning, such as VBFL (see above discussion) and other BCFL mechanisms you've identified. I agree it’s important for the framework to be adaptable and evolve with advances.

  3. End-to-end (e2e) testing. How do you plan to show that the complete system works? What will be the test case that shows the federated learning and incentive model working? I think the testing part mentions unit tests and somewhat manual checks in the documentation, but when we evaluate a system we need to see all parts working together. Please add an e2e testing guide as a deliverable of your proposal and explain what the case will be to show the system working.

I've updated the grant proposal in the last pull request and included training using the MNIST dataset; it's a suitable starting point for FL classification and will serve as a good basis for the e2e testing guide. I have updated the grant proposal deliverables; please see 0c: "The guide will have Testing and End-to-End (e2e) Testing... demonstrating federated learning tests using the MNIST dataset for classification."
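To give a feel for what the e2e case in deliverable 0c could look like, here is a hypothetical pytest-style sketch (plain Keras, two simulated clients, one FedAvg round); the actual guide's test and the on-chain reward settlement are for the team to define.

```python
# Hypothetical sketch of an e2e-style test for the MNIST federated
# classification case; two simulated clients, one FedAvg round. The on-chain
# reward settlement step is out of scope here.
import numpy as np
import tensorflow as tf

def make_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def test_federated_mnist_round_improves_accuracy():
    (x, y), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x, x_test = x / 255.0, x_test / 255.0
    clients = [(x[:2000], y[:2000]), (x[2000:4000], y[2000:4000])]

    global_model = make_model()
    _, acc_before = global_model.evaluate(x_test, y_test, verbose=0)

    # one federated round: local training per client, then weight averaging
    weight_sets = []
    for cx, cy in clients:
        local = make_model()
        local.set_weights(global_model.get_weights())
        local.fit(cx, cy, epochs=1, verbose=0)
        weight_sets.append(local.get_weights())
    global_model.set_weights(
        [np.mean(layers, axis=0) for layers in zip(*weight_sets)])

    _, acc_after = global_model.evaluate(x_test, y_test, verbose=0)
    assert acc_after > acc_before  # a full e2e test would also check reward payouts
```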

@AshleyTuring
Contributor Author

I'm interested in the results of this, so I'm giving my approval.

Great! Thank you for the support @semuelle

@AshleyTuring
Contributor Author

AshleyTuring commented Sep 30, 2023

Thanks @nikw3f

Would this be able to support custom hardware for machine learning like NVIDIA DGX systems or tensor cores?

Absolutely! The TensorFlow library was chosen partly because of its significant investments in hardware optimisation, specifically around GPUs and TPUs, through various distribution strategies. Happy to discuss this further on a call if more details are needed.
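For context, the "distribution strategies" mentioned refer to TensorFlow's `tf.distribute` API; a minimal sketch (not DecentralML code) of placing a Keras model under a multi-GPU strategy scope:

```python
# Minimal sketch of tf.distribute: the same Keras model, wrapped in a
# MirroredStrategy scope, trains data-parallel across local GPUs (e.g. a DGX).
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
# model.fit(...) now shards each batch across the available GPUs
```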

Contributor

@dsm-w3f dsm-w3f left a comment

@livetreetech thank you for the answer and the improvement. As long as the incentive strategy is flexible, it can be adjusted, and the game theory showing that the incentive model works is the responsibility of the model creator. Having e2e tests enables us to verify the software. When implementing the test plan, please consider including test cases that show how to change the incentive model using the Strategy pattern. I assume that other parts of the incentive model will also use the Strategy pattern and will have test cases to show their flexibility. Correct me if I'm wrong. Considering that, I'm happy to go forward with it.

@AshleyTuring
Contributor Author

@livetreetech thank you for the answer and the improvement. As long as the incentive strategy is flexible, it can be adjusted, and the game theory showing that the incentive model works is the responsibility of the model creator. Having e2e tests enables us to verify the software. When implementing the test plan, please consider including test cases that show how to change the incentive model using the Strategy pattern. I assume that other parts of the incentive model will also use the Strategy pattern and will have test cases to show their flexibility. Correct me if I'm wrong. Considering that, I'm happy to go forward with it.

@dsm-w3f good point, we will include tests to demo changing the incentives and the other parts of the incentive model within the strategy. Thank you for the support.

Contributor

@keeganquigley keeganquigley left a comment

Standing in for @semuelle's re-approval.

Collaborator

@Noc2 Noc2 left a comment

I'm happy to go ahead with it as well, especially given that, theoretically, it's already approved anyway (once @semuelle re-approves it).

@Noc2 Noc2 merged commit 0aec595 into w3f:master Oct 3, 2023
7 checks passed
@github-actions
Contributor

github-actions bot commented Oct 3, 2023

Congratulations and welcome to the Web3 Foundation Grants Program! Please refer to our Milestone Delivery repository for instructions on how to submit milestones and invoices, our FAQ for frequently asked questions and the support section of our README for more ways to find answers to your questions.

Before you start, take a moment to read through our announcement guidelines for all communications related to the grant or make them known to the right person in your organisation. In particular, please don't announce the grant publicly before at least the first milestone of your project has been approved. At that point or shortly before, you can get in touch with us at [email protected] and we'll be happy to collaborate on an announcement about the work you’re doing.

Lastly, please remember to let us know in case you run into any delays or deviate from the deliverables in your application. You can either leave a comment here or directly request to amend your application via PR. We wish you luck with your project! 🚀

@AshleyTuring
Contributor Author

@Noc2, many thanks! We appreciate your support!

Our back-and-forth discussion on the project has been lengthy but informative, and we are excited to bring various PMFs around what this project delivers, i.e. tokenisation and governance over foundational, brand-specific models. We are currently looking at next month to fully release resources. The conversation has created shifting timelines around the grant, but we have been progressing and developing various use cases. Thank you for understanding!

@takahser
Collaborator

@livetreetech PMFs as in "Product-Market Fits"?
Regarding the timeline, if you're experiencing delays it's usually not a problem, as long as you communicate with us that you're still working on it and, once you exceed the deadline, amend your proposal accordingly. 👍

@takahser takahser mentioned this pull request Nov 9, 2023
@AshleyTuring
Contributor Author

AshleyTuring commented Nov 10, 2023

Hope this finds you all well! Just to note, we have made a start on the project and added Mathias Ciliberto (https://www.linkedin.com/in/mciliberto/). We will be making commits and commenting here where appropriate during the implementation. Thanks again for the support.

@takahser
Collaborator

@livetreetech thanks for keeping us in the loop, appreciated. Just for completeness, could you amend your proposal and add him to the team? It should be easy and quick to approve from our side. 👍

@AshleyTuring
Contributor Author

AshleyTuring commented Nov 14, 2023

Thanks @takahser, I did commit the changes (application md attached) - I don't see why they're not reflected. If possible, to save some time, could you please commit it from your side (and send some instructions if you get a minute; I'll play around with git ASAP)?
decentral_ml.md

@AshleyTuring
Contributor Author

Hey, hope this finds you well - we have pushed over the milestone 1 delivery and are excited with how the implementation is going; we have good momentum: w3f/Grant-Milestone-Delivery#1079. When you get a minute, can you please give us an indication of when you'll have time to review? We would like to avoid any delays or pauses in development. Thank you.

@takahser
Collaborator

@livetreetech sorry, I didn't reply to your comment regarding the amendment earlier. If you update your fork and create a new pull request, it should work fine.
Regarding the milestone delivery, thanks for the submission. Unfortunately, this month some team members are out of office for various reasons, hence we're a bit behind on our backlog. But we usually give teams initial feedback within 2 weeks of milestone submission. LMK if you have any further questions.

@AshleyTuring
Contributor Author

Thank you for your update @takahser. I've just submitted a new pull request adding Dr. Ciliberto.

I wanted to bring our project's current situation to your attention: the delays you've outlined on your side put us in a tricky position. Our team, including highly paid, sought-after PhDs and professionals, is working on this two-month project with dedication. As you can understand, halting their work for 2+ weeks in a two-month project is not feasible, both practically and financially. Each day of delay adds significant risk and cost to our project. I appreciate the usual process and timelines you follow, but considering our circumstances, I kindly ask if there is any way to expedite the review. We're confident in the quality of our work, and you can see the code and concept are solid. We are more than willing to assist in any way to facilitate a quicker evaluation. If it helps, we're open to arranging a call to walk you through our code and address any concerns you might have. Thank you for understanding.

@keeganquigley
Contributor

keeganquigley commented Dec 11, 2023

Hi @livetreetech, thanks for your patience; a majority of the team was out last week. Milestones can be submitted concurrently, so you don't have to wait for this one to be reviewed before moving on to the next one. Therefore you shouldn't need to delay development; you can go ahead and submit the next milestone once it is ready. I hope this helps!

@AshleyTuring
Contributor Author

Thank you for your response @keeganquigley. I understand that milestones can be submitted concurrently and appreciate this flexibility. However, I would like to bring to your attention the financial aspects of our project. Currently, I am covering the costs associated with our team's work. I don't mind doing this, as I believe in what we are doing; however, it means that as we progress without confirmed funding, the financial risk increases significantly. I hope you understand that while we are committed to advancing the project, it's crucial to manage these financial commitments responsibly. I believe a direct conversation could be very beneficial to explain the situation and align. I am more than willing to arrange a short call at your earliest convenience to discuss how we can expedite the review process and ensure a smooth continuation of our project work. Your cooperation in this regard would be greatly appreciated. Please let me know a suitable time for a call, and I'll make the necessary arrangements.

@semuelle
Member

Hey @livetreetech. I will try to review the milestone tomorrow. I'll let you know if I have any questions.

Labels: ready for review - The project is ready to be reviewed by the committee members.