This document summarizes an optimization technique for adjusting air pollution emission rates in an air quality model using data from low-cost air quality sensors. The technique is an inversion method that automatically adjusts emissions inputs to improve model predictions against monitored concentrations. Preliminary tests in Cambridge, UK optimized NOx emission rates from 305 road sources against data from 20 low-cost sensors and 5 reference monitors. The optimization reduced errors between modeled and monitored concentrations and adjusted emission profiles and rates in a physically reasonable manner.
Amy Stidworthy - Optimising local air quality models with sensor data - DMUG17
1. Amy Stidworthy
DMUG, 6th April 2017, London
Optimising local air quality models with sensor data: examples from Cambridge
2. Acknowledgements
• The work presented here has been done by CERC, based on ADMS-Urban modelling of Cambridge following the deployment of AQMesh sensors in Cambridge. Partners include:
– Rod Jones & Lekan Popoola, Department of Chemistry, University of Cambridge
– Dan Clarke, Cambridgeshire County Council, Cambridge
– Jo Dicks & Anita Lewis, Cambridge City Council, Cambridge
– Ian Leslie, Computer Laboratory, University of Cambridge
– Amanda Randle, AQMesh
3. Outline of presentation
• Motivation
• Optimisation technique
• Preliminary results for Cambridge
• Further work
4. Motivation
• Emissions errors account for a significant proportion of dispersion model error
• Traditionally, dispersion models such as CERC's ADMS-Urban model are validated against data from reference monitors:
– Modellers either use the validation to improve the model setup; or
– Calculate and apply a model adjustment factor to model results
• New low-cost air pollution sensors allow large networks of sensors to be installed across a city
• Accuracy and reliability are generally lower than for reference monitors, but larger spatial coverage is possible
• How can we best use these sensor data in modelling?
• If the data are not accurate and reliable enough for model validation, maybe we can use the data in a different way...
5. Optimisation technique: Overview
• The aim is to develop an inversion technique that uses monitoring data from a network of sensors to automatically adjust emissions to improve model predictions
• Basic idea:
– Run ADMS-Urban to obtain modelled concentrations at monitor locations in the normal way
– Use these modelled concentrations and their associated emissions as a 'first guess', together with:
a) monitored concentration data
b) information about the error in the monitored data and the proportion of that error that is systematic across all monitors
c) information about the error in the emissions data and the proportion of that error that is systematic across all sources
– Use an inversion technique to calculate an adjusted set of emissions that reduces the error in the modelled concentrations
6. Optimisation technique: Introduction
• There are some conditions that have to be satisfied for such a scheme to work:
a) The modelled concentration must be proportional to the emissions, which means that complex effects like chemistry have to be ignored
b) Any source included must affect at least one receptor (monitor)
c) Any receptor included must have a non-zero concentration
• The technique developed uses a probabilistic approach following work by others, for example as used by the Met Office for estimating volcanic ash source parameters using satellite retrievals [Webster et al., 2016]
7. Optimisation technique: Cost function
We define a cost function J(x) with two terms: one that describes the error in the modelled concentrations (left-hand term) and one that describes the error in the emissions (right-hand term):

J(x) = (Mx − y)^T R^{−1} (Mx − y) + (x − e)^T B^{−1} (x − e)

The aim is to minimise J to obtain x, a vector of adjusted emissions.

Quantity  Definition                                                      Dimensions
x         Vector of emissions (result)                                    n
M         Transport matrix relating the source term to the observations   k by n
y         Vector of observations                                          k
R         Error covariance matrix for the observations                    k by k
e         Vector of first-guess emissions                                 n
B         Error covariance matrix for the first-guess emissions           n by n
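As a concrete illustration (a sketch, not CERC's code), the cost function translates directly into NumPy; the argument names follow the table above:

```python
import numpy as np

def cost(x, M, y, R, B, e):
    """J(x) = (Mx - y)^T R^-1 (Mx - y) + (x - e)^T B^-1 (x - e)."""
    r_conc = M @ x - y   # residual in modelled concentrations (size k)
    r_emis = x - e       # departure from first-guess emissions (size n)
    return (r_conc @ np.linalg.solve(R, r_conc)
            + r_emis @ np.linalg.solve(B, r_emis))
```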
8. DMUG, London, 6th April 2017
Optimisation technique: Least squares problem
• To solve the cost function minimisation problem, we first convert
the problem to a ‘least squares’ problem, which is easier to solve
computationally
• A ‘least squares’ problem finds the best solution to the equation
Ax=f, where x is a vector of size m, f is a vector of size n and A is
a matrix with n rows and m columns.
• The result of solving the least squares problem is the vector x
that gives the minimum value of the sum of the squares of the
elements of (Ax-f)
• So, we need to write the cost function as
J(x) = (Ax - f)^T (Ax - f)
Fast forward through the maths...
9. DMUG, London, 6th April 2017
Optimisation technique: Error covariance matrices
• To solve the problem, we need to construct the matrix A and the vector
f, but do we have all the information we need for this?
M is the transport matrix: this represents the contribution of every
source to every receptor given a unit emission rate
y is the vector of monitored concentrations at each receptor
e is the vector of emissions for each source
• What are the matrices T and D?
These are related to the 'covariance' matrices R and B that
represent the error in the monitored data and emissions data
respectively.
The diagonal components of the covariance matrices represent the
variance in the data, which is related to the uncertainty in the data;
The off-diagonal components represent how much of the error is ‘co-
varying’, or in other words, systematic.
The cost function is then written as
J(x) = (Ax - f)^T (Ax - f),
where A = \begin{pmatrix} TM \\ D \end{pmatrix} and
f = \begin{pmatrix} Ty \\ De \end{pmatrix},
with the factors T and D chosen so that T^T T = R^{-1} and
D^T D = B^{-1}.
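As a sketch (one possible choice, not necessarily CERC's: T and D taken
as Cholesky-type factors of the inverse covariances, which satisfies
the relations above), the stacked least-squares system can be assembled
and solved with NumPy:

```python
import numpy as np

def solve_inversion(M, y, R, e, B):
    # Factors with T^T T = R^-1 and D^T D = B^-1 (transposed Cholesky of the inverses)
    T = np.linalg.cholesky(np.linalg.inv(R)).T
    D = np.linalg.cholesky(np.linalg.inv(B)).T
    A = np.vstack([T @ M, D])                 # (k + n) rows, n columns
    f = np.concatenate([T @ y, D @ e])
    # x minimises ||Ax - f||^2, i.e. the cost function J(x)
    x, *_ = np.linalg.lstsq(A, f, rcond=None)
    return x
```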
10. DMUG, London, 6th April 2017
Optimisation technique: Monitoring data error
• The diagonal components of the monitoring data error
covariance matrix R represent the variance σ_Obs² of the
monitored data, which is the square of the standard deviation
σ_Obs:
– We assume that the standard deviation σ_Obs is equal to the
uncertainty in the monitoring data expressed as a concentration,
i.e. σ_Obs = U_Obs × O, where O is the monitored concentration
and U_Obs is the uncertainty expressed as a fraction.
• The off-diagonal components represent the error that co-varies
between monitors, i.e. systematic error
– We say that a given proportion of the uncertainty is due to
systematic error
11. DMUG, London, 6th April 2017
Optimisation technique: Monitoring data error
• So, for any two monitors labelled i and j, their covariance is
defined as
R_{ij} = (U_Obs y_i)^2, if i = j
R_{ij} = (Uf_Obs U_Obs y_i)(Uf_Obs U_Obs y_j), if i ≠ j
• The factor Uf_Obs represents the fraction of the monitoring data
uncertainty that is due to systematic error.
• This raises questions:
– How much of monitoring data error is systematic?
– Should monitors of different types be treated as independent, with
no co-variance?
– Are there any causes of monitoring data error that affect all
monitors, e.g. temperature, humidity?
– Is there co-variance between sensors for different pollutants?
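A short sketch (illustrative, not CERC's implementation) of building R
from this definition; the same helper also serves for B later:

```python
import numpy as np

def error_covariance(values, u, uf):
    """Covariance matrix from fractional uncertainty u and systematic fraction uf.

    Diagonal: (u * value_i)^2.
    Off-diagonal: (uf * u * value_i) * (uf * u * value_j).
    """
    sigma = u * np.asarray(values, dtype=float)  # standard deviations
    C = np.outer(uf * sigma, uf * sigma)         # systematic (co-varying) part
    np.fill_diagonal(C, sigma ** 2)              # full variance on the diagonal
    return C

# e.g. R for the AQMesh sensors alone, with the values used for Cambridge:
# R = error_covariance(y_aqmesh, u=0.3, uf=0.1)
```

Since no covariance is assumed between reference monitors and AQMesh
sensors, R for the full mixed network would be block diagonal, with one
such block per monitor type.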
12. DMUG, London, 6th April 2017
Optimisation technique: Emissions data error
• The diagonal components of the emissions error covariance
matrix B represent the variance σ_Em² of the emissions data,
which is the square of the standard deviation σ_Em:
– We assume that the standard deviation σ_Em is equal to the
uncertainty in the emissions data expressed as an emission rate,
i.e. σ_Em = U_Em × E, where E is the first-guess emission rate
and U_Em is the uncertainty expressed as a fraction.
• The off-diagonal components represent the error that co-varies
between sources, i.e. systematic error
– We say that a given proportion of the uncertainty is due to
systematic error, for example traffic emissions factors
13. DMUG, London, 6th April 2017
Optimisation technique: Emissions data error
• So, for any two sources labelled i and j, their covariance is
defined as
B_{ij} = (U_Em e_i)^2, if i = j
B_{ij} = (Uf_Em U_Em e_i)(Uf_Em U_Em e_j), if i ≠ j
• The factor Uf_Em represents the fraction of the emissions data
uncertainty that is due to systematic error.
• This also raises questions:
– How much of the emissions data error is systematic? For
example, what proportion of road emissions data error is due to
errors in the emission factors (systematic) and how much is due
to traffic counts (non-systematic)?
– Is there any co-variance in the emissions data error for different
pollutants? PM10 and PM2.5 – yes, but PM10 and NOX?
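Since B has exactly the same structure, the error_covariance helper
sketched earlier can be reused with the first-guess emissions; the 0.5
and 0.4 values are those used for the Cambridge tests (slide 22):

```python
# B for the road sources, reusing the sketch above; Uem = 0.5, Ufem = 0.4
B = error_covariance(e, u=0.5, uf=0.4)
```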
14. DMUG, London, 6th April 2017
Preliminary results: Cambridge
• CERC have been collaborating on a project to study
ambient air quality across Cambridge using a large
number of sensor nodes and computer modelling.
• 20 AQMesh sensor pods have been placed at key
points around Cambridge, measuring air quality in near
real time.
15. DMUG, London, 6th April 2017
Preliminary results: Cambridge
• The aim of the preliminary Cambridge tests presented here is
primarily to examine the behaviour of the optimisation scheme
and refine the process, i.e.
– Does it work?!
– Is it practical? If it takes weeks to run then obviously not.
– What effect does the choice of uncertainty parameters have on
outcome?
– How does the validation at the reference monitors change?
– Can we learn anything about emissions?
16. DMUG, London, 6th April 2017
ADMS-Urban model setup
• One source type: 305 road sources
• One pollutant: NOX
• 25 monitors: 20 AQMesh monitors and 5 reference monitors
• Time-varying emission factors: diurnal profiles for weekdays,
Saturdays and Sundays
• Daylight saving option used to obtain correct emission factors
• 3-month period: 30/06/2016 01:00 to 30/09/2016 23:00
17. DMUG, London, 6th April 2017
Optimisation process
Step 1: Run ADMS-Urban to obtain modelled
concentrations at monitoring site locations
Step 2: Form the transport matrix, emissions
vector and monitored data vector
Step 3: Run the optimisation scheme
Step 4: Create an hourly factors (.hfc) file
from the adjusted emissions data
Step 5: Re-run ADMS-Urban using the
adjusted emissions .hfc file
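A sketch of how the five steps might be scripted; every helper function
here is a hypothetical placeholder (only the .hfc hourly-factors file
is a real ADMS-Urban input), and solve_inversion is the least-squares
sketch from earlier:

```python
# Hypothetical driver for the five-step process; all helpers are placeholders.
modelled = run_adms_urban(model_setup)                       # Step 1: first-guess run
for hour in hours:
    M, y, e = build_hourly_inputs(modelled, monitors, hour)  # Step 2: assemble M, y, e
    x = solve_inversion(M, y, R, e, B)                       # Step 3: optimisation
    store_adjusted_emissions(x, hour)                        # collect hourly results
write_hfc_file("adjusted_emissions.hfc")                     # Step 4: hourly factors file
run_adms_urban(model_setup, hfc="adjusted_emissions.hfc")    # Step 5: re-run
```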
18. DMUG, London, 6th April 2017
Optimisation parameters
• As described previously, we specify the following parameters
in the optimisation:
Parameter name   Description
Uobs(ref)        Observation uncertainty (reference monitors)
Uobs(aqmesh)     Observation uncertainty (AQMesh sensors)
Ufobs(ref)       Observation uncertainty covariance factor (reference monitors)
Ufobs(aqmesh)    Observation uncertainty covariance factor (AQMesh sensors)
Uem              Emissions uncertainty
Ufem             Emissions uncertainty covariance factor
19. DMUG, London, 6th April 2017
Optimisation Technique: Effect of uncertainty
Optimisation is working!
J is reduced for all uncertainty values
Increasing the uncertainty values U relaxes the constraints, so J is
reduced less
20. DMUG, London, 6th April 2017
Effect of monitor uncertainty on concentrations
[Figure: NOx concentration (µg/m³, scale 0–300) at each monitoring
site: Observed, Model (original emissions) and Model (adjusted
emissions, all sensor data). Annotations: AQMesh sensors – more model
error tolerated; reference monitors – less model error tolerated.]
• In these inversion calculations:
– Reference monitor uncertainty set to 10%
– AQMesh sensor uncertainty set to 30%
– Covariance between Reference monitors (systematic error) set to 5%
– Covariance between AQMesh sensors (systematic error) set to 10%
– No covariance between Reference monitors and AQMesh sensors
Example hour: 7am on 5th July
21. DMUG, London, 6th April 2017
Effect of emissions covariance on adjusted emissions
[Figure: percentage change in source emission rate (scale -100% to
+200%) for each road source, under two emissions error covariance
settings: zero emissions error covariance and 75% emissions error
covariance.]
• If emissions error covariance is zero, emissions can
change completely independently
• With non-zero emissions error covariance, emissions
have to change more consistently across all sources
22. DMUG, London, 6th April 2017
Cambridge: optimisation parameters used
Parameter name   Description                                                       Value
Uobs(ref)        Observation uncertainty (reference monitors)                      0.1
Uobs(aqmesh)     Observation uncertainty (AQMesh sensors)                          0.3
Ufobs(ref)       Observation uncertainty covariance factor (reference monitors)    0.05
Ufobs(aqmesh)    Observation uncertainty covariance factor (AQMesh sensors)        0.1
Uem              Emissions uncertainty                                             0.5
Ufem             Emissions uncertainty covariance factor                           0.4
All monitoring data are provisional apart from the Gonville Place
reference monitor; AQMesh data were obtained in real time.
23. DMUG, London, 6th April 2017
Effect of optimisation on model validation
Statistic    1.      2.      3.
Mean Obs     31.2    31.2    31.2
Mean Mod     34.5    29.3    31.3
StDev Obs    27.9    27.9    27.9
StDev Mod    31.0    26.0    27.0
MB           3.30    -1.91   0.10
NMSE         0.51    0.05    0.39
R            0.70    0.97    0.75
Fac2         0.71    0.94    0.73
(Mean, StDev and MB in µg/m³; NMSE, R and Fac2 are dimensionless)
1. Orig_RdsOnly
Base case model output
2. Inv_ReRun_AllSensors
Model output using optimised emissions;
optimisation carried out using all sensor data
3. Inv_ReRun_AQMeshSensorsOnly
Model output using optimised emissions;
optimisation carried out using AQMesh data
only
Validation statistics are computed at the reference sites only. Some
plotted data points were not included in the inversion (points where
the monitored concentration was below the background concentration;
see notes).
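For reference, a sketch of how these standard validation statistics
(MB, NMSE, R, Fac2) are conventionally computed; the definitions below
are the usual model-evaluation ones, assumed here rather than taken
from the slides:

```python
import numpy as np

def validation_stats(obs, mod):
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    mb = np.mean(mod - obs)                                  # mean bias
    nmse = np.mean((mod - obs) ** 2) / (np.mean(obs) * np.mean(mod))
    r = np.corrcoef(obs, mod)[0, 1]                          # correlation
    fac2 = np.mean((mod >= 0.5 * obs) & (mod <= 2.0 * obs))  # within factor of 2
    return mb, nmse, r, fac2
```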
24. DMUG, London, 6th April 2017
Effect of optimisation on diurnal emissions profiles
[Figure: diurnal emission factor profiles (scale 0–2.5) for Weekday,
Saturday and Sunday, hours 0–23: Original; Adjusted, all sensors;
Adjusted, AQMesh sensors only.]
25. DMUG, London, 6th April 2017
Effect of optimisation on mean emission rates
[Figure: two scatter plots of mean adjusted emission rate against mean
first-guess emission rate (g/km/s, both axes 0–1): optimisation using
AQMesh sensor data only, and optimisation using AQMesh sensor data and
reference monitor data.]
26. DMUG, London, 6th April 2017
Effect of optimisation on mean emission rates
[Figure: percentage change in mean emission rate per road source (scale
-60% to +60%). Optimisation using AQMesh sensor data only: average
change -2.8%. Optimisation using AQMesh sensor data and reference
monitor data: average change -3.0%.]
Including reference data causes
big changes in just a few sources
27. DMUG, London, 6th April 2017
Example output at reference monitors: 5th July 2016
[Figure: observed and modelled NOx concentrations through the day at
five monitoring sites: Montague Rd, Regent St, Gonville Place, Parker
St, Newmarket Rd.]
28. DMUG, London, 6th April 2017
7-day average concentration: Adjusted - Original
Example of how the optimisation process affects concentration contours:
a general reduction, but an increase in some areas.
[Contour map: adjusted minus original 7-day average NOX concentration,
scale -30 to +30 µg/m³.]
29. DMUG, London, 6th April 2017
Discussion and further work
• We have developed an optimisation scheme to use data from a
network of sensors to automatically adjust emissions and thereby
improve model results
• Tests show that the scheme works and initial results are
encouraging, but there is more work to do, for example:
– More than one pollutant
– Other source types
• The optimisation scheme run times are also encouraging:
approximately 15 minutes to run 3 months of hourly data with 305
sources and 25 receptors, carrying out the optimisation for each
individual hour
• The values of uncertainty and covariance factors used so far are
largely arbitrary; we need to use realistic values to obtain meaningful
results
• After Cambridge, the next step is to run the scheme with sensor data
collected at Heathrow during the NERC SNAQ project.
30. DMUG, London, 6th April 2017
Thank you
• Thanks again to CERC’s partners in this work:
– Rod Jones & Lekan Popoola, Department of Chemistry, University
of Cambridge
– Dan Clarke, Cambridgeshire County Council, Cambridge
– Jo Dicks & Anita Lewis, Cambridge City Council, Cambridge
– Ian Leslie, Computer Laboratory, University of Cambridge
– Amanda Randle, AQMesh
• For more information about the ADMS-Urban dispersion
model, see www.cerc.co.uk/Urban
Editor's Notes
#4: Explain that this is ongoing work and these are preliminary results – much more work to do!
#5: Traditionally, dispersion models are validated by comparing measured and modelled concentrations at well-established monitoring sites; at best, modellers manually refine the dispersion modelling to minimise error at these locations; at worst, modellers calculate ‘adjustment factors’ and apply these to modelled concentrations.
Meanwhile, the increasing availability of relatively low cost air pollution sensors that are easy to install and to maintain is allowing networks of such sensors to be installed across urban areas. Although these sensors have reduced reliability and accuracy compared with traditional monitors they allow much greater spatial coverage. A systematic method that integrates data from these low cost sensors with models could deliver real benefits in terms of understanding emissions and improving model estimates.
#17: From Kate:
I picked up an EMIT inventory from Mark Attree (maybe Chetan) from P:\FM\FM1085_Cambridge\EMIT\FM1034\Cambridge2013_20150713.MDB
This database was for the year 2013 and was made by Cambridge City Council, together with our help I believe.
I left the database as it was, other than changing the roads emission factors to be for 2016. The flows and route type were left as they were - the route type was a special one created specifically for Cambridge for 2013 - we thought this would be more accurate than the generic 2016 route type.
The exhaust emission factors used for 2016 were NAEI 2014 Urban for the year 2016.
My EMIT db is here:
P:\IP\IP155 Cambridge sensors\Working\EMIT\Cambridge2013_20150713.MDB
Other emission sources in the inventory include:
– Guided buses
– Car parks
– Addenbrooke's boilers, car parks, bus station and internal roads
– Park and ride
– Queues
– NAEI grid sources and point sources
#24: Including all sensor data results in excellent agreement at the reference monitors, particularly the high correlation.
Odd results above the y=2x line represent points where the monitored concentration is less than the background concentration, so these data points could not be included in the inversion, i.e. concentrations at these receptors for these hours were not part of the inversion process and did not constrain the emissions adjustment.
Very encouraging results from the AQMesh data only run: reduced bias and error, improved correlation and fraction within a factor of 2.
#25: Small change in the diurnal profile, particularly weekdays: note the increase in the morning rush hour and the decrease in the evening rush hour.
Very little difference between runs including all sensor data and runs only including AQMesh sensor data.
#27: The sources that change most when reference sensor data are included are those right next to the reference monitors, as you might expect.
#28: These graphs show variation in observed and modelled concentration through the day on one day only: 5th July 2016.
The graphs show that for some sensors, e.g. Regent Street and Montague Rd, if the reference sensors are included in the inversion then the modelled concentration can be made to fit the observed concentration; at these receptors the modelled concentration is dominated by a single source. For the receptors where the inversion has a harder job making the modelled concentration fit the observed concentration (e.g. Newmarket Rd), it is because many sources impact on the receptor.