opensource-wg
[Project Update] Impact Framework
Project leads: @jmcook1186 (GSF PM), @srini1978 (Microsoft), @navveenb (Accenture). Repository: https://github.com/Green-Software-Foundation/carbon-ql
In #75, @srini1978 mentioned that meetings have kicked off for the WG with Amadeus and all others who expressed interest. The readme has been reviewed, and the group identified how we might calculate SCI for various types of project: on-prem, managed services, cloud, etc. User stories have been created; these are good candidates for a minimum lovable product.
@Oleg-Zhymolokhov / @srini1978 gave an update in OSS WG:
Looking for direction on how the project can add value beyond existing projects. The original plan was an API that lets those unfamiliar with SCI calculation generate an SCI score in a user-friendly way, but feedback was that a lot of companies already use Cloud Carbon Footprint and other APIs. @jawache gave two suggestions for analysing this:
1) Use the sci-ontology project to drive the scope of the application architecture, i.e. users build their app architecture in SCI-Ontology. @navveenb is making examples that provide the ability to enter data, so that sci-ontology can generate a score from that data.
2) Make CarbonQL an SDK that provides a set of modules, plus applications that use those modules; describe the architecture in code, e.g. VMs in Azure or another cloud.
Looking ahead, the group is figuring out where CarbonQL's sweet spot will be and looking at integration possibilities. SCI-Ontology has a GUI for describing the software boundary. Chatting with Amadeus at the moment.
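For context, the SCI score these tools aim to generate is defined in the GSF SCI specification as operational emissions plus embodied emissions, per functional unit: SCI = ((E * I) + M) per R. A minimal sketch of that arithmetic (plain TypeScript; the function and parameter names are illustrative, not part of any CarbonQL API):

```typescript
// SCI = ((E * I) + M) / R, per the GSF SCI specification.
// E: energy consumed (kWh), I: grid carbon intensity (gCO2eq/kWh),
// M: embodied emissions (gCO2eq), R: functional unit count (e.g. API calls).
// Names are illustrative only, not a CarbonQL interface.
function sciScore(
  energyKwh: number,
  gridIntensity: number,
  embodiedGCo2e: number,
  functionalUnits: number
): number {
  if (functionalUnits <= 0) {
    throw new Error("functional unit count must be positive");
  }
  return (energyKwh * gridIntensity + embodiedGCo2e) / functionalUnits;
}

// e.g. 2 kWh at 400 gCO2eq/kWh, plus 200 gCO2eq embodied, over 1000 requests:
console.log(sciScore(2, 400, 200, 1000)); // 1 gCO2eq per request
```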
Suggest discussing in GitHub to get input. https://github.com/Green-Software-Foundation/opensource-wg/discussions/79
Filippo from Shell mentioned going through same process; trying to do more than Cloud Carbon Footprint.
Update from @srini1978:
Had a demo from the SCI Ontology project. High-level alignment: build an extensible SDK that helps anyone plug and play a model using telemetry from cloud or on-prem sources. The first cut is focused on cloud providers. The intent is to build a few models from the bottom up (covering different data providers: Cloud Carbon Footprint, Etsy, GCP, etc). The goal is for people to provide input parameters, utilize the SDK interfaces, and extend them. This discussion summarizes where the group aligned: https://github.com/Green-Software-Foundation/carbon-ql/discussions/26
Next steps: build the design spec, based on the discussion above ^
@Oleg-Zhymolokhov update: missed the call this week, but Srini is working on some code snippets. Also investigating two models: Cloud Carbon Footprint and Etsy's approach. Some notes have been prepared.
@srini1978 / @navveenb : if you could share any notes when you have a chance, it would be great. No rush though.
Reviewed the CQL design doc with @Oleg-Zhymolokhov ( https://docs.google.com/document/d/1vJKIEMFId5rl81GaWUXBWtyUhALyqJjeMq_ViTtKX1Q/edit#heading=h.dt3dajceouk6 ) - which compared CCF and the Etsy implementation of carbon measurement.
@Oleg-Zhymolokhov Planning to get requirements from @navveenb, working with the SCI Ontology project Python developer/resource.
Query from @Willmish regarding the scope of CarbonQL vs Carbon Aware SDK: https://github.com/Green-Software-Foundation/carbon-ql/issues/32
Update from @srini1978: the WG has identified the two parts we want to build.
- The interface/application model: to define and build the boundary of the application, i.e. to call out the infra components (serverless, laptops, etc.) that someone may want to define as part of the software being built.
- The backend piece, the carbon model: no longer an API, but a framework that can integrate with any backend model to calculate operational emissions.
- Standardise the way we call these models, whether building new ones or integrating with existing ones,
- so that any customer who wants to calculate emissions has a standardised interface specification to the backend models.
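To make "a standardised interface to the backend models" concrete, here is a hedged sketch of what such an interface could look like. This is a hypothetical TypeScript illustration, not CarbonQL's actual specification; every name (`TelemetryInput`, `CarbonModel`, etc.) is invented for the example:

```typescript
// Hypothetical sketch of a standardised backend-model interface.
// All names are illustrative and are not CarbonQL's actual spec.
interface TelemetryInput {
  timestamp: string;       // ISO 8601
  duration: number;        // observation window, seconds
  cpuUtilization: number;  // 0..1
}

interface EmissionsEstimate {
  operationalGCo2e: number; // grams CO2-equivalent
}

interface CarbonModel {
  name: string;
  estimate(inputs: TelemetryInput[]): EmissionsEstimate[];
}

// A toy model: fixed power draw scaled by CPU utilisation and grid intensity.
class ConstantIntensityModel implements CarbonModel {
  name = "constant-intensity";
  constructor(private tdpWatts: number, private gridIntensity: number) {}

  estimate(inputs: TelemetryInput[]): EmissionsEstimate[] {
    return inputs.map((i) => {
      // watts * utilisation * seconds -> kWh (3.6e6 joules per kWh)
      const kwh = (this.tdpWatts * i.cpuUtilization * i.duration) / 3_600_000;
      return { operationalGCo2e: kwh * this.gridIntensity };
    });
  }
}

// 100 W machine at 50% utilisation for one hour, 500 gCO2eq/kWh grid:
const model = new ConstantIntensityModel(100, 500);
console.log(
  model.estimate([
    { timestamp: "2024-01-01T00:00:00Z", duration: 3600, cpuUtilization: 0.5 },
  ])
);
```

The point of the sketch is the shape, not the numbers: any data provider (Cloud Carbon Footprint, a cloud API, a file) could sit behind the same `estimate` signature, which is what a standardised interface spec would pin down.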
Specification still under review, WG/project team met over the last few weeks to discuss.
@Oleg-Zhymolokhov, @jawache and @Willmish tried to connect; decided to discuss in this forum.
@Sealjay mentioned this might cross over with the real time emissions data spec; tightly coupled projects.
@jawache covered this project and highlighted its positive aspects: how we're treating the project in incubation and identifying the solution.
- Carbon Aware SDK: Clear overlap - original proposal for Carbon Aware SDK was to create an API, which then became an interface/facade pattern to WattTime & ElectricityMap and other data providers.
- CarbonQL: not an API, but a software interface to other objects (which could be files or other things); a common way to share carbon intensity data. How this will manifest is not yet defined.
- Hope: a common series of software interfaces for carbon data (as above, the data providers); if that exists, it could potentially sit a layer below the Carbon Aware SDK, or be consumed by it.
@Willmish confirmed that a standard interface for extracting data from various services is already built into the Carbon Aware SDK, and asked how this would differ from that approach when supporting a new datasource.
@Willmish shared current process for CA SDK: https://github.com/Green-Software-Foundation/carbon-aware-sdk/blob/dev/docs/architecture/data-sources.md#creating-a-new-data-source
Next steps:
- @Willmish to work through the Carbon Aware SDK components on the next CarbonQL call to share the learnings from the SDK, and to whiteboard the components the SDK already has.
- Action to add project calls to the Open Source Calendar
@srini1978 -- could you please update here on any progress since the last OSSWG call? Thank you!
IEF Update:
@jawache
Repository: https://github.com/Green-Software-Foundation/ief
- Several rounds of iteration on the IEF specification, now very close to an agreed v1
- Development of a CLI known as `rimpl` that executes model plugins
- Development of several (~15) "builtin" models for `rimpl`
- Several real-world case studies implemented through `rimpl`, including an "external" model plugin developed by a third-party team
- Demo of the complete PoC in the IEF weekly meeting
Ongoing tasks:
- WIP: User and developer documentation
- WIP: test coverage
- WIP: time and impact aggregation process
- WIP: contribution guidelines
Since last time, we have:
- updated the specification and naming conventions across the project
- added new models, including our Azure importer
- pulled the models into separate repositories from the framework
- published the framework and models as npm packages under the @grnsft scope
- created unit tests with 100% coverage
- created a set of integration tests
- created a project documentation website
- created user tutorials and an onboarding video
- demo'd the system in the IEF weekly meetings
- released v0.1.0 aka "alpha"!
Next up:
- demo at Decarb
- expand the set of available models (Co2js, Carbon-SDK, enhanced Azure importer, etc)
- more walk-through guides and videos
- possibly update the current methodology for time normalization and metric aggregation
@jmcook1186 Late reminder but please can you add an update here ahead of the OSWG later today.
Since last time, we have:
- completed post-alpha cleanup
- improved error handling
- surfaced more specific error messages to the user
- started implementing node-level and graph-level aggregation
- refined project management
Next up:
- finish aggregation implementation
- time normalization
- additional data importers
- expanding the set of model plugins
since last time we have:
- redesigned our time sync and aggregation requirements
- started implementing time sync/aggregation
- implemented new models
- refactoring
- started preparing for Carbon Hack
next up:
- focus on carbon hack
- get time sync and aggregation perfect
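The time-sync work mentioned above is about aligning observation series from different components onto a common timeline so their impacts can be compared and aggregated. A simplified sketch of one way to do this, linear interpolation onto a fixed-interval grid (illustrative only, not IF's actual implementation):

```typescript
// Resample an irregular observation series onto a fixed-interval grid using
// linear interpolation. Illustrative only; not the Impact Framework's code.
type Obs = { t: number; value: number }; // t = epoch seconds

// Assumes `series` is non-empty and sorted by t ascending.
function resample(series: Obs[], startT: number, endT: number, step: number): Obs[] {
  const out: Obs[] = [];
  for (let t = startT; t <= endT; t += step) {
    const after = series.findIndex((o) => o.t >= t);
    if (after === -1) {
      // past the last observation: hold the final value
      out.push({ t, value: series[series.length - 1].value });
    } else if (after === 0 || series[after].t === t) {
      // before the first observation, or an exact timestamp hit
      out.push({ t, value: series[after].value });
    } else {
      // interpolate between the two bracketing observations
      const a = series[after - 1];
      const b = series[after];
      const frac = (t - a.t) / (b.t - a.t);
      out.push({ t, value: a.value + frac * (b.value - a.value) });
    }
  }
  return out;
}

// Two observations 10 s apart, resampled every 5 s:
console.log(resample([{ t: 0, value: 0 }, { t: 10, value: 10 }], 0, 10, 5));
```

A real implementation also has to decide how to treat gaps, units that should be summed rather than interpolated (e.g. energy vs. power), and mismatched window boundaries, which is what makes getting this "perfect" non-trivial.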
Update from @Sophietn at OSWG on 2024 01 09:
- Project getting everything in order ahead of the Carbon Hack
- Main tasks being finished: the time syncing and aggregation features, improving documentation, and the hack repo and UX for hackers.
Project lead please add any additional updates, thank you :)
Hello, here are IF updates for 23/1/24
- we have hired a new developer onto the team
- we have started doing weekly livestreams and other hackathon onboarding activities
- we are midway through a complete overhaul of the documentation website
- we have shipped the final, fully featured version of our time-sync plugin
- we have shipped a new aggregation feature that summarizes impacts over time and across components
- we have defined a set of best practices and refactored all our plugins to conform
- we have specified two major upgrades to be implemented in the next 2-3 sprints (upgrading units.yml and changing the plugin signature)
After the units and signature upgrades our focus will be primarily on developer experience upgrades and hackathon support.
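The aggregation feature mentioned above summarizes impacts in two directions: over time within each component, and across components. A toy sketch of that idea (the data shapes and names here are hypothetical, not IF's actual tree format):

```typescript
// Toy illustration of two-way aggregation: sum a metric over each
// component's time series, then across components. Hypothetical shapes;
// not the Impact Framework's actual manifest/tree structure.
type ComponentSeries = { component: string; carbonGCo2e: number[] };

function aggregate(tree: ComponentSeries[]): {
  perComponent: Record<string, number>;
  total: number;
} {
  const perComponent: Record<string, number> = {};
  for (const s of tree) {
    // aggregate over time within one component
    perComponent[s.component] = s.carbonGCo2e.reduce((a, b) => a + b, 0);
  }
  // aggregate across components
  const total = Object.values(perComponent).reduce((a, b) => a + b, 0);
  return { perComponent, total };
}

console.log(
  aggregate([
    { component: "api", carbonGCo2e: [1, 2] },
    { component: "db", carbonGCo2e: [3] },
  ])
); // { perComponent: { api: 3, db: 3 }, total: 6 }
```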
Hi - here are updates from the IF for 6/2/24:
- we have reviewed and merged some fairly substantial community PRs
- we have been refining hackathon ideas in the discussion and issue boards on gsf/hack
- we have started a refactor of the core IF infrastructure that we expect to take us until the end of next week
- we have fixed up all our plugins according to our latest coding guidelines
@jmcook1186 please could you share an update here :)
Update for IF 20/02/24
- we have been very focused on refactoring our codebase since the last call - this is now ~90% finished
- in the current sprint we are doing cleanup, documentation updates and starting a training workshop for helping people to understand IF
Update for 5/3/24
- completed IF refactor
- shipped full aggregation, groupby and exhaust features
- updated all docs
- started formal QA process
- began cleanup tasks and preparation for the code freeze