Commit
Merge branch 'main' into development_dosimetry-metrics
steffenhartmeyer committed Sep 2, 2024
2 parents 27b705f + f9a4f06 commit d639cb8
Showing 101 changed files with 1,889 additions and 590 deletions.
Binary file removed .DS_Store
Binary file not shown.
3 changes: 3 additions & 0 deletions .Rbuildignore
Original file line number Diff line number Diff line change
@@ -14,3 +14,6 @@
^app\.R$
^vignettes/articles$
^\.gitlab-ci\.yml$
^cran-comments\.md$
^LightLogR-manual\.tex$
^CRAN-SUBMISSION$
3 changes: 3 additions & 0 deletions CRAN-SUBMISSION
@@ -0,0 +1,3 @@
Version: 0.3.8
Date: 2024-07-04 09:38:13 UTC
SHA: 8714a0b64b88a425313e7a6c19913fb3878cf1d0
17 changes: 6 additions & 11 deletions DESCRIPTION
@@ -1,6 +1,6 @@
Package: LightLogR
Title: Work With Data from Wearable Light Loggers and Optical Radiation Dosimeters
Version: 0.3.6
Title: Process Data from Wearable Light Loggers and Optical Radiation Dosimeters
Version: 0.4.0
Authors@R: c(
person("Johannes", "Zauner",
email = "[email protected]", role = c("aut", "cre"),
@@ -15,17 +15,11 @@ Authors@R: c(
person("EURAMET", role = "fnd", comment = "European Association of National Metrology Institutes. Website: www.euramet.org. Grant Number: 22NRM05 MeLiDos. Grant Statement: The project (22NRM05 MeLiDos) has received funding from the European Partnership on Metrology, co-financed from the European Union’s Horizon Europe Research and Innovation Programme and by the Participating States."),
person("European Union", role = "fnd", comment = "Co-funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or EURAMET. Neither the European Union nor the granting authority can be held responsible for them."),
person("TSCN-Lab", comment = c(URL = "www.tscnlab.org"), role = "cph"))
Description: LightLogR is a package under development as part of the MeLiDos
project aimed at developing a standard workflow for wearable light logger
data and optical radiation dosimeters. MeLiDos is a joint, EURAMET-funded
project involving sixteen partners across Europe. Its primary contributions
towards fostering FAIR data include the development of a common file format,
robust metadata descriptors, and an accompanying open-source software
ecosystem.
Description: Import, processing, validation, and visualization of personal light exposure measurement data from wearable devices. The package implements features such as the import of data and metadata files, conversion of common file formats, validation of light logging data, verification of crucial metadata, calculation of common parameters, and semi-automated analysis and visualization.
License: GPL (>= 3)
Encoding: UTF-8
Roxygen: list(markdown = TRUE)
RoxygenNote: 7.3.1
RoxygenNote: 7.3.2
URL: https://github.com/tscnlab/LightLogR,
https://tscnlab.github.io/LightLogR/,
https://zenodo.org/doi/10.5281/zenodo.11562600
@@ -58,11 +52,12 @@ Depends:
LazyData: true
Suggests:
covr,
gghighlight,
gt,
gtsummary,
knitr,
patchwork,
rmarkdown,
testthat (>= 3.0.0),
tidyverse
Config/testthat/edition: 3
VignetteBuilder: knitr
2 changes: 2 additions & 0 deletions NAMESPACE
@@ -41,6 +41,7 @@ export(interdaily_stability)
export(interval2state)
export(intradaily_variability)
export(join_datasets)
export(ll_import_expr)
export(midpointCE)
export(nvRC)
export(nvRC_circadianBias)
@@ -52,6 +53,7 @@ export(period_above_threshold)
export(pulses_above_threshold)
export(sc2interval)
export(sleep_int2Brown)
export(supported_devices)
export(symlog_trans)
export(threshold_for_duration)
export(timing_above_threshold)
36 changes: 35 additions & 1 deletion NEWS.md
@@ -1,7 +1,41 @@
# LightLogR 0.3.6
# LightLogR 0.4.0 "Nautical dawn"

* release on CRAN!

* changed the `supported.devices` list into a function, `supported_devices()`, so that the documentation automatically updates with the list of supported devices. Similarly, `ll_import_expr` is now `ll_import_expr()`.
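
  A minimal sketch of the renamed interface (assuming the package is installed; the exact return shapes are an assumption):

  ```r
  library(LightLogR)

  # formerly the exported data object `supported.devices`;
  # now a function, so the device list always matches the documentation
  supported_devices()

  # formerly the exported list `ll_import_expr`;
  # now also a function returning the device-specific import expressions
  import_expressions <- ll_import_expr()
  ```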

* added support for the Meta `VEET` device for visual experience measurements

* added support for the `Kronowise` device

* added support for the MPI `melanopiQ Circadian Eye` (Prototype)

* rewrote the import function for `Actiwatch_Spectrum`, as the sample file the original was based on had formatting specific to German standards. The German version can still be called through `Actiwatch_Spectrum_de`, whereas the main function now refers to the English/international format.
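
  The two Actiwatch variants can be sketched like this (file names are hypothetical; `import$<Device>()` is the direct-call pattern documented by the package):

  ```r
  library(LightLogR)

  # English/international export (the rewritten main function)
  dat <- import$Actiwatch_Spectrum("participant_01.csv", tz = "UTC")

  # German-formatted export (decimal commas, German date formatting)
  dat_de <- import$Actiwatch_Spectrum_de("proband_01.csv", tz = "Europe/Berlin")
  ```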

* updated the landing page for the website with a list of supported devices and a table of metrics

* small changes to documentation

# LightLogR 0.3.8

* Submission to CRAN

# LightLogR 0.3.7 "Astronomical dawn"

* Changes to the tutorial articles on the website

* Integration of a community survey on the website and GitHub README.

# LightLogR 0.3.6

* `bright_dark_period()` now maintains the date when looping the data.

* Added articles on `Import & Cleaning`, `Metrics`, and `Visualizations` to the website.

* Added the option for more print rows of observation intervals during `import`.

* Added the option to set a length for the dataset starting from the end in `filter_Datetime()` and family.

# LightLogR 0.3.5

* Added the function `aggregate_Date()` to aggregate long datasets to one day per group.
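
  That aggregation can be sketched as follows (grouping by `Id` is an assumption about the dataset's structure):

  ```r
  library(LightLogR)
  library(dplyr)

  # collapse every group's multi-day recording into a single aggregate day
  one_day_per_group <- sample.data.environment %>%
    group_by(Id) %>%
    aggregate_Date()
  ```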
3 changes: 1 addition & 2 deletions R/aaa.r
@@ -1,4 +1,4 @@
Time <- mEDI <- Time.data <- Datetime <- timestamp <- tz <- Day.data <- `DATE/TIME` <- n <- Datetime.rounded <- id <- sleep.colname.string <- file.name <- Interval <- original.datapoints.fleeting <- MEDI <- State.Brown <- Reference <- Reference.check <- Id <- Start.date.shift <- data <- Shift <- `MELANOPIC EDI` <- State <- group <- End <- Start <- Quant.x <- Quant.y <- is.implicit <- group.indices <- Id2 <- gap.id <- start <- end <- path <- auto.id <- n_max <- manual.id <- silent <- Light <- Day <- N <- is_missing <- Hour <- .change <- dst_start <- .dst <- .dst2 <- dst_adjustment <- auto.plot <- group.1 <- group.2 <- group.indices2 <- cluster_start <- cluster_end <- row_idx <- is_cluster <- cluster_idx <- is_pulse <- pulse_idx <- light <- time <- level <- duration <- mean_duration <- onset <- midpoint <- offset <- mean_onset <- mean_midpoint <- mean_offset <- Date.data <- NULL
Time <- mEDI <- Time.data <- Datetime <- timestamp <- tz <- Day.data <- `DATE/TIME` <- n <- Datetime.rounded <- id <- sleep.colname.string <- file.name <- Interval <- original.datapoints.fleeting <- MEDI <- State.Brown <- Reference <- Reference.check <- Id <- Start.date.shift <- data <- Shift <- `MELANOPIC EDI` <- State <- group <- End <- Start <- Quant.x <- Quant.y <- is.implicit <- group.indices <- Id2 <- gap.id <- start <- end <- path <- auto.id <- n_max <- manual.id <- silent <- Light <- Day <- N <- is_missing <- Hour <- .change <- dst_start <- .dst <- .dst2 <- dst_adjustment <- auto.plot <- group.1 <- group.2 <- group.indices2 <- cluster_start <- cluster_end <- row_idx <- is_cluster <- cluster_idx <- is_pulse <- pulse_idx <- light <- time <- level <- duration <- mean_duration <- onset <- midpoint <- offset <- mean_onset <- mean_midpoint <- mean_offset <- Date.data <- print_n <- NULL

empty_function <- function() {
rsconnect::accountInfo()
@@ -7,7 +7,6 @@ empty_function <- function() {
}

.onLoad <- function(libname, pkgname) {
utils::globalVariables("supported.devices")
utils::globalVariables(".")
}

4 changes: 2 additions & 2 deletions R/aggregate_Date.R
@@ -77,7 +77,7 @@ aggregate_Date <- function(dataset,
date.handler = stats::median,
numeric.handler =
mean,
character.handler =
character.handler =
\(x) names(which.max(table(x, useNA = "ifany"))),
logical.handler =
\(x) mean(x) >= 0.5,
@@ -117,7 +117,7 @@ aggregate_Date <- function(dataset,
dataset %>%
create_Timedata(Datetime.colname = !!Datetime.colname.defused) %>%
dplyr::mutate(Date.data = lubridate::date(!!Datetime.colname.defused),
Date.data = (!!date.handler)(unique(Date.data))) #set the date according to the date handler
Date.data = (!!date.handler)(Date.data)) #set the date according to the date handler

#group by Time.data
dataset <-
2 changes: 1 addition & 1 deletion R/cut_Datetime.R
@@ -12,7 +12,7 @@
#' `Datetime.colname`.
#' @param unit Unit of binning. See [lubridate::round_date()] for examples. The default is `"3 hours"`.
#' @param type One of `"round"`(the default), `"ceiling"` or `"floor"`. Setting
#' chooses the relevant function from [lubridate].
#' chooses the relevant function from \pkg{lubridate}.
#' @param Datetime.colname column name that contains the datetime. Defaults to
#' `"Datetime"` which is automatically correct for data imported with
#' [LightLogR]. Expects a `symbol`. Needs to be part of the `dataset`.
30 changes: 1 addition & 29 deletions R/data.r
@@ -17,32 +17,4 @@
#' \item{Id}{A `character` vector indicating whether the data is from the `Participant` or from the `Environment`.}
#' }
#' @source <https://www.tscnlab.org>
"sample.data.environment"


#' A vector of all supported devices for import functions
#'
#' These are all supported devices where there is a dedicated import function.
#' Import functions can be called either through [import_Dataset()] with the
#' respective `device = "device"` argument, or directly, e.g.,
#' `import$ActLumus()`.
#'
#' @format `supported.devices` A character vector, listing all supported devices
#' \describe{
#' \item{supported.devices}{strings}
#' }
"supported.devices"

#' A list of the specific device import functions
#'
#' These expressions are used to import and prepare data from specific devices.
#' The list is made explicit, so that a user, requiring slight changes to the
#' import functions, (e.g., because a timestamp is formatted differently) can
#' modify or add to the list. The list can be turned into a fully functional
#' import function through `import_adjustment()`.
#'
#' @format `ll_import_expr` A list, with specific expressions for each supported device
#' \describe{
#' \item{ll_import_expr}{expressions}
#' }
"ll_import_expr"
"sample.data.environment"
30 changes: 24 additions & 6 deletions R/filter_Datetime.R
@@ -3,7 +3,7 @@
#' Filtering a dataset based on Dates or Datetimes may often be necessary prior
#' to calculation or visualization. The functions allow filtering based on
#' simple `strings` or `Datetime` scalars, or by specifying a length. They also
#' support prior [dplyr] grouping, which is useful, e.g., when you only want to
#' support prior \pkg{dplyr} grouping, which is useful, e.g., when you only want to
#' filter the first two days of measurement data for every participant,
#' regardless of the actual date. If you want to filter based on times of the
#' day, look to [filter_Time()].
@@ -18,10 +18,10 @@
#' * If `length` is provided and one of start/end is not, the other will be calculated based on the given value.
#' * If `length` is provided and both of start/end are NULL, the time from the
#' respective start is taken.
#' @param length Either a Period or Duration from [lubridate]. E.g., `days(2) +
#' @param length Either a Period or Duration from \pkg{lubridate}. E.g., `days(2) +
#' hours(12)` will give a period of 2.5 days, whereas `ddays(2) + dhours(12)`
#' will give a duration. For the difference between periods and durations look
#' at the documentation from [lubridate]. Basically, periods model clocktimes,
#' at the documentation from \pkg{lubridate}. Basically, periods model clocktimes,
#' whereas durations model physical processes. This matters on several
#' occasions, like leap years, or daylight savings. You can also provide a
#' `character` scalar in the form of e.g. "1 day", which will be converted
@@ -37,6 +37,10 @@
#' is FALSE). This is useful, e.g., when the first observation in the dataset
#' is slightly after midnight. If TRUE, it will count the length from midnight
#' on to avoid empty days in plotting with [gg_day()].
#' @param length_from_start A `logical` indicating whether the `length` argument
#' should be applied to the start (default, TRUE) or the end of the data
#' (FALSE). Only relevant if neither the `start` nor the `end` arguments are
#' provided.
#' @param only_Id An expression of `ids` to which the filtering should be
#' applied. If `NULL` (the default), the filtering will be applied to all `ids`.
#' Based on this expression, the dataset will be split in two and only
@@ -93,6 +97,7 @@ filter_Datetime <- function(dataset,
start = NULL,
end = NULL,
length = NULL,
length_from_start = TRUE,
full.day = FALSE,
tz = NULL,
only_Id = NULL,
@@ -156,12 +161,22 @@
start <- dataset[[Datetime.colname.defused]] %>% min()
}

#calculate end time if length is given


#calculate end time if length is given & length_from_start is TRUE
if(is.null(end) & !is.null(length)) {
if(length_from_start) {
end <- switch (full.day %>% as.character(),
"TRUE" = lubridate::as_date(start, tz = tz),
"FALSE" = lubridate::as_datetime(start, tz = tz)
) + length
} else {
end <- dataset[[Datetime.colname.defused]] %>% max()
start <- switch (full.day %>% as.character(),
"TRUE" = lubridate::as_date(end, tz = tz),
"FALSE" = lubridate::as_datetime(end, tz = tz)
) - length
}
}

#calculate end time if NULL
@@ -233,7 +248,10 @@ filter_Date <- function(...,
#' to be quoted with [quote()] or [rlang::expr()].
#' @param filter_function The function to be used for filtering, either
#' `filter_Datetime` (the default) or `filter_Date`
#' @param ... Additional arguments passed to the filter function
#' @param ... Additional arguments passed to the filter function. If the
#' `length` argument is provided here instead of in `arguments`, it has to be
#' written as a string, e.g., `length = "1 day"`, instead of `length =
#' lubridate::days(1)`.
#'
#' @return A dataframe with the filtered data
#' @export
@@ -245,7 +263,7 @@ filter_Date <- function(...,
#' #compare the unfiltered dataset
#' sample.data.environment %>% gg_overview(Id.colname = Id)
#' #compare the filtered dataset
#' sample.data.environment %>%
#' sample.data.environment %>%
#' filter_Datetime_multiple(arguments = arguments, filter_Date) %>%
#' gg_overview(Id.colname = Id)
filter_Datetime_multiple <- function(dataset,
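Based on the documentation above, the new `length_from_start` argument could be used like this (a sketch against the package's sample dataset; exact results not shown):

```r
library(LightLogR)
library(dplyr)

# default: keep the first two days, counted from each group's start
first_two <- sample.data.environment %>%
  filter_Datetime(length = "2 days")

# new in this version: count the two days back from the *end* instead
last_two <- sample.data.environment %>%
  filter_Datetime(length = "2 days", length_from_start = FALSE)
```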
8 changes: 5 additions & 3 deletions R/gg_day.r
@@ -60,15 +60,15 @@
#' function (see examples).
#' @param x.axis.breaks,y.axis.breaks Where should breaks occur on the x and
#' y.axis? Expects a `numeric vector` with all the breaks. If you want to
#' activate the default behaviour of [ggplot2], you need to put in
#' activate the default behaviour of \pkg{ggplot2}, you need to put in
#' [ggplot2::waiver()].
#' @param y.scale.sc `logical` for whether scientific notation shall be used.
#' Defaults to `FALSE`.
#' @param geom What geom should be used for visualization? Expects a `character`
#' * `"point"` for [ggplot2::geom_point()]
#' * `"line"` for [ggplot2::geom_line()]
#' * `"ribbon"` for [ggplot2::geom_ribbon()]
#' * as the value is just input into the `geom_` function from [ggplot2], other variants work as well, but are not extensively tested.
#' * as the value is just input into the `geom_` function from \pkg{ggplot2}, other variants work as well, but are not extensively tested.
#' @param group Optional column name that defines separate sets. Useful for
#' certain geoms like `boxplot`. Expects anything that works with the layer
#' data [ggplot2::aes()]
@@ -238,7 +238,9 @@ gg_day <- function(dataset,
# Scales --------------------------------------------------------------
jco_color_scheme+
ggplot2::scale_x_time(breaks = x.axis.breaks,
labels = scales::label_time(format = "%H:%M")) +
labels = scales::label_time(format = "%H:%M"),
expand = c(0,0),
limits = c(0,24*3600)) +
ggplot2::scale_y_continuous(
trans = y.scale,
breaks = y.axis.breaks,
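The new `scale_x_time` settings above pin the x-axis to a full day; a comparable stand-alone ggplot2 sketch with toy data (not the package's internals):

```r
library(ggplot2)

# toy data: seconds since midnight vs. illuminance
df <- data.frame(time = c(6, 12, 18) * 3600, lux = c(50, 1000, 100))

ggplot(df, aes(x = time, y = lux)) +
  geom_point() +
  scale_x_time(
    labels = scales::label_time(format = "%H:%M"),
    expand = c(0, 0),          # no padding past the limits
    limits = c(0, 24 * 3600)   # always span the full 24-hour day
  )
```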
2 changes: 1 addition & 1 deletion R/gg_days.R
@@ -21,7 +21,7 @@
#' @param y.axis.breaks Where should breaks occur on the y.axis? Expects a
#' `numeric vector` with all the breaks or a function that calculates them
#' based on the limits. If you want to activate the default behaviour of
#' [ggplot2], you need to put in [ggplot2::waiver()].
#' \pkg{ggplot2}, you need to put in [ggplot2::waiver()].
#' @param x.axis.breaks The (major) breaks of the x-axis. Defaults to
#' [Datetime_breaks()]. The function has several options for adjustment. The
#' default setting place a major break every 12 hours, starting at 12:00 of
7 changes: 5 additions & 2 deletions R/gg_doubleplot.R
@@ -58,7 +58,7 @@ gg_doubleplot <- function(dataset,
type = c("auto", "repeat", "next"),
geom = "ribbon",
alpha = 0.5,
col = "#EFC000FF",
col = "grey40",
fill = "#EFC000FF",
linewidth = 0.4,
x.axis.breaks.next = Datetime_breaks,
@@ -120,7 +120,10 @@ gg_doubleplot <- function(dataset,
linewidth = linewidth,
x.axis.breaks = x.axis.breaks,
x.axis.format = x.axis.format,
x.axis.limits = \(x) Datetime_limits(x, length = lubridate::ddays(1), doubleplot = TRUE),
x.axis.limits =
\(x) Datetime_limits(
x, length = lubridate::ddays(1), midnight.rollover = TRUE
),
...
))
}
