Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
[Unreleased]
[0.7.3] - 2023-05-17
[0.7.3] - Fixed
Compatibility with newer PyTorch Benchmark version.
[0.7.2] - 2022-06-03
[0.7.2] - Added
Version for protobuf during build.
Conditional install of `redis` on Windows platforms.
[0.7.1] - 2022-03-18
[0.7.1] - Fixed
Device transfer in benchmark.
[0.7.0] - 2022-03-18
[0.7.0] - Added
Defensive fallback for FLOPs measurement.
Add `MultiStepLR` scheduler.
[0.7.0] - Changed
Profiling to use `pytorch_benchmark` package.
[0.7.0] - Fixed
WandB logger `log_dir` extraction.
[0.6.1] - 2022-02-07
[0.6.1] - Changed
Profile only warms up on first inference.
[0.6.0] - 2022-01-27
[0.6.0] - Added
Memory profiling.
[0.6.0] - Fixed
Tune `DeprecationWarning`.
[0.5.1] - 2021-11-16
[0.5.1] - Added
Add `pred` and `target` dict support in `Lifecycle`.
[0.5.1] - Fixed
Avoid detaching loss in step.
[0.5.0] - 2021-11-12
[0.5.0] - Added
Add `preprocess_batch` method to `Lifecycle`.
Add option for string type in `utils.name`.
Add Metric Selector.
[0.5.0] - Fixed
Weight freezing during model loading.
Fix `discriminative_lr` param selection for `NoneType` parameters.
Fix wandb project naming during hparamsearch.
Optimizer schedulers take `accumulate_grad_batches` into account.
[0.5.0] - Changed
Key debug statements while loading models to include both missing and unexpected keys.
Bumped PL to version 1.4. Holding back on 1.5 due to Tune integration issues.
Bumped Tune to version 1.8.
[0.4.6] - 2021-09-21
[0.4.6] - Fixed
Update profile to use `model.__call__`. This enables non-`forward` executions during profiling.
Add `DefaultMethods` mixin with `warm_up` to make `warm_up` overloadable by mixins.
[0.4.5] - 2021-09-08
[0.4.5] - Fixed
Fix `warm_up` function signature.
Requirement versions.
[0.4.4] - 2021-09-08
[0.4.4] - Added
`warm_up` function that is called prior to profiling.
[0.4.4] - Fixed
Learning rate schedulers discounted steps.
[0.4.3] - 2021-06-03
[0.4.3] - Added
Logging of layers that are unfrozen.
[0.4.3] - Fixed
Cyclic learning rate schedulers now update on step.
[0.4.2] - 2021-06-02
[0.4.2] - Added
Added explicit logging of model profiling results.
Automatic assignment of `hparams.num_gpus`.
[0.4.2] - Fixed
Finetune weight loading checks.
Cyclic learning rate schedulers account for batch size.
[0.4.1] - 2021-05-27
[0.4.1] - Fixed
Feature extraction on GPU.
[0.4.1] - Added
Added explicit logging of hparams.
[0.4.0] - 2021-05-17
[0.4.0] - Fixed
Pass args correctly to trainer during testing.
[0.4.0] - Changed
`CheckpointEveryNSteps` now included in `ModelCheckpoint`, c.f. `pl==1.3`.
Import from `torchmetrics` instead of `pl.metrics`.
Moved confusion matrix to `RideClassificationDataset` and updated plot.
[0.4.0] - Added
Feature extraction and visualisation.
`Lifecycle` and `Finetuneable` mixins always included via `RideModule`.
Support for `pytorch-lightning==1.3`.
Additional tests: Coverage is now at 92%.
[0.4.0] - Removed
Support for nested inheritance of `RideModule`.
Support for `pytorch-lightning==1.2`.
[0.3.2] - 2021-04-15
[0.3.2] - Fixed
Project dependencies: removed `click` and added `psutil` to requirements.
Logging: Save stdout and stderr to `run.log`.
[0.3.2] - Changed
Logged results names: flattened folder structure and streamlined names.
[0.3.2] - Added
Docstrings to remaining core classes.
Tests that logged results exist.
[0.3.1] - 2021-03-24
[0.3.1] - Added
Add support for namedtuples in dataset `input_shape` and `output_shape`.
Add tests for test_enemble.
Expose more classes via `from ride import XXX`.
Fix import-error in hparamsearch.
Fix issues in metrics and add tests.
Remove unused cache module.
[0.3.1] - Changed
Renamed `Dataset` to `RideDataset`.
[0.3.0] - 2021-03-24
[0.3.0] - Added
Documentation for getting started, the Ride API, and a general API reference.
Automatic import of `SgdOptimizer`.
[0.3.0] - Changed
Renamed `Dataset` to `RideDataset`.
[0.2.0] - 2021-03-23
[0.2.0] - Added
Initial publicly available implementation of the library.