Conversation
```cpp
#include <math.h>
//sigmoid
double sigmoid(double x) {
    return 1 / (1 + exp(-x));
```
Use `std::expl` (or `std::exp`) instead of `exp` everywhere.
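For reference, a minimal sketch of the suggested fix, assuming the same function shape as in the diff above:

```cpp
#include <cmath>  // prefer <cmath> over <math.h> in C++; it declares std::exp

// sigmoid(x) = 1 / (1 + e^(-x)), with the std-qualified exponential
double sigmoid(double x) {
    return 1.0 / (1.0 + std::exp(-x));
}
```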
```cpp
}
//tan inverse
double arctan(double x) {
    return atan(x);
```
Use std::atan(...) instead of atan(...)
docs/methods/activation_functions.md
Outdated
````markdown
## Example

```
````
Suggested change:

````diff
-```
+```cpp
````
| Name | Definition | Return value |
|------|------------|--------------|
| `oneHotEncoder(vector<T> data, nClasses)` | To encode the data into numerical values. | `vector<int>` |
````markdown
## Example

```
````
Suggested change:

````diff
-```
+```cpp
````
_(diff: a run of consecutive empty lines)_
Remove these empty lines.
```cpp
template<class T>

double sigmoid(double x);
double tanh(double x);
double ReLU(double x);
double leakyReLU(double x, double alpha);
std::vector<double> softmax(const std::vector<double> &x);
double arctan(double x);
double binaryStep(double x);
```
Add doc comments above each function.
Example for sigmoid:

```cpp
/**
 * @brief To calculate sigmoid(x)
 * @param x: Number whose sigmoid value is to be calculated
 * @return a double value representing sigmoid(x)
 */
```
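Applied to the declarations in this header, the result could look something like this (a sketch; the parameter descriptions are illustrative, not taken from the repo):

```cpp
#include <vector>

/**
 * @brief To calculate sigmoid(x)
 * @param x: Number whose sigmoid value is to be calculated
 * @return a double value representing sigmoid(x)
 */
double sigmoid(double x);

/**
 * @brief To calculate the softmax of a vector
 * @param x: Vector of values to normalise into a probability distribution
 * @return a vector of doubles that sums to 1
 */
std::vector<double> softmax(const std::vector<double> &x);
```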
uttammittal02 left a comment:

Only a few changes required and then we're good to go.
docs/methods/activation_functions.md
Outdated
> tanh- The Tanh activation function is a hyperbolic tangent sigmoid function that has a range of -1 to 1. It is often used in deep learning models for its ability to model nonlinear boundaries
>
> tan-1h-The inverse of tanh.The ArcTan function is a sigmoid function to model accelerating and decelerating outputs but with useful output ranges.
Arctan is not the inverse of tanh; it is the inverse of the ordinary trigonometric tan(x) function.
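In `<cmath>` terms the two inverses are distinct functions; a small sketch makes the difference explicit (the `arctanh` wrapper name is hypothetical, not from the repo):

```cpp
#include <cmath>

double arctan(double x) { return std::atan(x); }    // inverse of tan(x)
double arctanh(double x) { return std::atanh(x); }  // inverse of tanh(x), a different function
```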
| Name | Definition | Type |
|------|------------|------|
| data | The data that has to be encoded is passed as the data parameter in the oneHotEncoder function. | `vector<int>` |
Data would be `vector<string>` and not `vector<int>`.
| Name | Definition | Return value |
|------|------------|--------------|
| `oneHotEncoder(vector<T> data, nClasses)` | To encode the data into numerical values. | `vector<int>` |
Return value of this method is `vector<vector<int>>`.
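Combining this with the `vector<string>` correction above, a sketch of what the encoder might look like (the label-indexing scheme is an assumption for illustration, not the repo's actual implementation):

```cpp
#include <map>
#include <string>
#include <vector>

// One-hot encode categorical labels: each label becomes a row containing a
// single 1. Assumes at most nClasses distinct labels appear in data.
std::vector<std::vector<int>> oneHotEncoder(const std::vector<std::string> &data,
                                            int nClasses) {
    std::map<std::string, int> index;  // label -> class index, in order first seen
    std::vector<std::vector<int>> encoded;
    for (const auto &label : data) {
        if (index.find(label) == index.end()) {
            int next = static_cast<int>(index.size());
            index[label] = next;
        }
        std::vector<int> row(nClasses, 0);
        row[index[label]] = 1;
        encoded.push_back(row);
    }
    return encoded;
}
```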
```cpp
#include "activation_functions.hpp"
template<class T>
// sigmoid
double sigmoid(double x)
```
You need to write these functions for a vector and not a single value

I'm changing all the functions

all changes updated
```cpp
}
<<<<<<< HEAD
// leaky ReLU
double leakyReLU(double x, double alpha)
```
Set default value of `alpha` to `0.1`.
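One C++ detail worth noting here: the default value belongs on the declaration in the header, not on the out-of-line definition. A minimal sketch, assuming the hpp/cpp split used in this PR:

```cpp
// activation_functions.hpp -- the default appears once, on the declaration
double leakyReLU(double x, double alpha = 0.1);

// activation_functions.cpp -- the definition repeats the signature without it
double leakyReLU(double x, double alpha) {
    return x >= 0 ? x : alpha * x;
}
```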
```cpp
} else {
    return alpha * x;
  }
}
```
Add a function to convert binary to bipolar and vice-versa
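A possible shape for those helpers (the names and the {0,1} ↔ {-1,1} convention are assumptions for illustration):

```cpp
#include <vector>

// Map binary {0, 1} values to bipolar {-1, 1}: b -> 2b - 1
std::vector<int> binaryToBipolar(const std::vector<int> &x) {
    std::vector<int> y(x.size());
    for (std::size_t i = 0; i < x.size(); i++) {
        y[i] = 2 * x[i] - 1;
    }
    return y;
}

// Map bipolar {-1, 1} values back to binary {0, 1}: b -> (b + 1) / 2
std::vector<int> bipolarToBinary(const std::vector<int> &x) {
    std::vector<int> y(x.size());
    for (std::size_t i = 0; i < x.size(); i++) {
        y[i] = (x[i] + 1) / 2;
    }
    return y;
}
```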
```cpp
<<<<<<< HEAD
 * @param x {double x} - double value on which the function is applied.
 *
 * @param x {vector<double>} - vector containing 'double' values of x for
 * softmax activation function implementation.
 *
 * @return {double value} - double value after putting x in the functions gets
 * returned.
=======
```
Suggested change: delete this leftover merge-conflict block.
@Ishwarendra sir, the changes you've asked me to make are not visible in my VS Code; those unwanted lines of code are already not there.
I formatted all the source files using Git Bash because of the clang-format error, but after committing and pushing those changes both checks are failing. How do I fix it?
```cpp
=======
//leaky ReLU
double leakyReLU(double x, double alpha) {
  if (x >= 0) {
    return x;
  } else {
    return alpha * x;
  }
}
>>>>>>> 5eebc29054fab6686e728aca29e64e1c53dd7a8c
```
Suggested change: delete this leftover merge-conflict block as well.
Co-authored-by: Ishwarendra Jha <75680424+Ishwarendra@users.noreply.github.com>
@NandiniGera Run formatter and commit again.

all changes updated

really sorry for all the silly mistakes :/

Hey, no problem with that... you guys are in the learning stage right now, so obviously you'll make mistakes in the beginning... just try not to repeat a mistake.
```cpp
 * Implementation of activation functions
 */
#include "activation_functions.hpp"
template<class T>
```
Remove this line; we do not require templates for these functions.
Suggested change: delete the `template<class T>` line.
```cpp
std::vector<double> leakyReLU(const std::vector<double> &x)
{
    std::vector<double> y(x.size());
    double alpha = 0.1;
```
Add `alpha` as a parameter instead, with default value `0.1`; do the change in the hpp file as well.
Suggested change:

```diff
-std::vector<double> leakyReLU(const std::vector<double> &x)
-{
-    std::vector<double> y(x.size());
-    double alpha = 0.1;
+std::vector<double> leakyReLU(const std::vector<double> &x, double alpha = 0.1)
+{
+    std::vector<double> y(x.size());
```
> Add `alpha` as a parameter instead with default value 0.1, do the change in hpp file as well

done
```cpp
std::vector<double> y(x.size());
for (int i = 0; i < x.size(); i++)
{
    y[i] = 1 / (1 + exp(-x[i]));
```
Use std::exp instead of exp
uttammittal02 left a comment:

A few more changes.
```cpp
#include "activation_functions.hpp"
template<class T>
// sigmoid
std::vector<double> sigmoid(const std::vector<double> &x)
```
Instead of initialising a new vector and returning it, change the return type to `void` and do the changes in vector `x` itself wherever possible.

done
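A minimal sketch of the in-place style being requested, using sigmoid as the example:

```cpp
#include <cmath>
#include <vector>

// Apply sigmoid to every element of x in place; no temporary vector is built
void sigmoid(std::vector<double> &x) {
    for (std::size_t i = 0; i < x.size(); i++) {
        x[i] = 1.0 / (1.0 + std::exp(-x[i]));
    }
}
```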
```cpp
std::vector<double> y(x.size());
for (int i = 0; i < x.size(); i++)
{
    y[i] = atan(x[i]);
```
Use std::atan instead of atan as told earlier
You don't need to mention what the function returns in the doc comments if the function returns void; rest LGTM.
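For instance, a doc comment for a void in-place function could then drop the `@return` tag entirely (a sketch with an assumed signature):

```cpp
/**
 * @brief Apply ReLU to each element of x in place
 * @param x: Vector of values to transform (modified in place)
 */
void ReLU(std::vector<double> &x);
```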
okay, done
I'm unable to resolve the build issue. How do I fix it?

We're working on it... will let you know soon.
Resolved issues #92 and #102 (doc added).

Added documentation for OneHotEncoder.