links to confluence in all readmes

parent a40b0b0f11
commit 11369333db

@@ -1,67 +1,4 @@
### Process Group Detection Rules and Naming

#### Detection Rule or Naming?

### How to configure process groups?

For the explanation, we're using a real example of the Infotainment application:

![PGNaming1](../../../../img/PGNaming1.PNG)

Before working with your dashboards and alerting profiles, an important task when working with Dynatrace is to check the structure of your applications (process groups). You can do that by clicking on *Technologies* and filtering with your application's Management Zone.

In the picture above, there are two process groups called bon-information-prod. **If you see duplicated process groups like in this case, you MUST follow this guideline.**

The next step is to open both process groups and compare their metadata. That way, you can identify whether all process instances are part of the same application or not. An easy way to do that is to ask yourself: how many instances of my application do I have?

If you have 4 instances in total and you can see 2 in one PG and the other 2 in another PG, it means that **they are part of the same application**.

Another situation could be that, when checking the metadata, you see that they are **two different applications** and Dynatrace is just naming the process groups the same way.

*Same application*
- Problem: Dynatrace is creating two different process groups, which translates into two separate services for the same application. Instead of seeing all the traffic in one service, it will be split, which complicates your monitoring.
- Solution: create a process group detection rule. Contact a Dynatrace expert.

*Different application*
- Problem: Dynatrace is just giving the same name to applications that are different.
- Solution: This case is less severe, since it can be fixed with a process group naming rule.

What about our example?

![PGNaming2](../../../../img/PGNaming2.PNG)
![PGNaming3](../../../../img/PGNaming3.PNG)

Based on the feedback of the Infotainment team, each process group is a different application (microservice), and this is visible in the Kubernetes container/workload within the metadata of each process group.

#### How to create a Process Group Detection Rule
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
```
config:
  - CDInfotainmentRule1: template.json

CDInfotainmentRule1:
  - name: Infotainment Rule 1
  - nameFormat: "{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}"
  - tag: Infotainment
  - skipDeployment: false
```
The result of the rule will be to rename the PGs to:
```
bon-information-prod ipa
bon-information-prod rsl
```

Other placeholders that you can use are, for example:
```
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
{ProcessGroup:KubernetesNamespace}
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
```

You can combine different ones. Check the [documentation](link) for more.
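The optional `/regex` part of a placeholder keeps only the portion of the source value that the regular expression matches (or, if the pattern has a capture group, that group). As a rough illustration of the behaviour with plain regular expressions (this is not Dynatrace code, the pod name is a made-up example, and the doubled backslashes in the list above are most likely just escaping inside the configuration file):

```python
import re

# Example value of {ProcessGroup:KubernetesNamespace}, taken from the screenshots above.
namespace = "bon-information-prod"
# Hypothetical value of {ProcessGroup:KubernetesFullPodName}, for illustration only.
pod_name = "buffet-datadownload-7f9c"

# [^\-]*$ keeps everything after the last dash -> "prod"
suffix = re.search(r"[^\-]*$", namespace).group(0)

# buffet-(.*?)- keeps the first capture group between "buffet-" and the next dash -> "datadownload"
workload = re.search(r"buffet-(.*?)-", pod_name).group(1)

print(suffix, workload)  # prod datadownload
```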

Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
@@ -1,35 +1,4 @@
### Service Naming Rules

A typical case could be that you open *Transactions & Services* and find two services that look exactly the same:

*DataDownloadV1*
*DataDownloadV1*

### How to configure service naming

If you drill down into each service and check its process group, you may find a PROD and an E2E process group behind them.

*Note: if you see that both process groups are exactly the same, please contact a Dynatrace expert to create a Process Group detection rule.*

In the case where the PGs are PROD and E2E, we need to create a rule that looks like this:

```
config:
  - CDInfotainmentRule1: template.json

CDInfotainmentRule1:
  - name: Infotainment Rule 1
  - nameFormat: "{Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}"
  - tag: Infotainment
  - skipDeployment: false
```

The rule takes the detected service name (the current name) and extracts, with a regex, the part of the Kubernetes namespace after the last "-", so "prod" or "e2e", resulting in:

*DataDownloadV1 - prod*
*DataDownloadV1 - e2e*

Now the services are easy to identify.

You can create rules based on any property/metadata. Some other placeholder examples:
```
{Service:DatabaseName} - E2E
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
```

Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
@@ -1,13 +1,4 @@
## Update dashboard configuration

- Configuration changes (e.g. to dashboards or alerting profiles) must be done via a pull request. A dashboard changed only in the environment will be overwritten by Monaco.
- How to apply changes to your dashboards?
1. Modify the dashboard in the Dynatrace UI with the intended changes.
2. Copy the JSON of the dashboard (it can be found under the dashboard settings).
3. Paste the copied JSON over the Monaco JSON, overwriting it (see the validation sketch below).
4. Commit and open a pull request to merge the branch into master:
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```
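Before committing, it can help to confirm that the pasted JSON is still valid and contains the expected top-level fields. A minimal sketch, assuming the dashboard file lives at the path shown below (adjust it to your application's configuration folder); note that if your template contains Monaco placeholders such as `{{ .name }}`, the file is not plain JSON and this check does not apply:

```python
import json
import sys

# Assumed path; point it at the Monaco dashboard JSON you just overwrote.
path = "CD_<app_name>/dashboard/dashboard.json"

try:
    with open(path, encoding="utf-8") as f:
        dashboard = json.load(f)
except json.JSONDecodeError as err:
    sys.exit(f"Invalid JSON in {path}: {err}")

# A Dynatrace dashboard export normally carries these top-level keys.
print("Dashboard name:", dashboard.get("dashboardMetadata", {}).get("name", "<unknown>"))
print("Number of tiles:", len(dashboard.get("tiles", [])))
```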

### How to configure dashboards?

Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
@@ -0,0 +1,4 @@
### How to configure management zones?

Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
@@ -1,60 +1,4 @@
## Configure Notification System

### How to configure notification systems?

### MS Teams - Default

*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams, just for your EMEA PROD.*

1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an `https://empty` webhook (not configured).
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams) You can verify it afterwards with the test sketch below these steps.
3. Add the incoming webhook under the webhook parameter for `<app_name>-PROD.EMEA-Prod`:
```
<app_name>-PROD.EMEA-Prod:
  - name: CD_<app_name> PROD
  - alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
  - webhook: <Add webhook here>
  - skipDeployment: "false"
```
4. Save and commit the changes:
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```
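Optionally, before relying on the new channel, you can send a test message to the incoming webhook to confirm it is reachable. A minimal sketch using Python and the `requests` package (the URL below is a placeholder for the webhook created in step 2; Teams incoming webhooks accept a simple JSON payload with a `text` field):

```python
import requests

# Placeholder: replace with the incoming webhook URL created in step 2.
webhook_url = "https://<your-tenant>.webhook.office.com/webhookb2/..."

# Minimal payload accepted by a Teams incoming webhook.
payload = {"text": "Dynatrace notification test - please ignore"}

response = requests.post(webhook_url, json=payload, timeout=10)
response.raise_for_status()
print("Webhook accepted the message:", response.status_code)
```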

Note: If you want to enable MS Teams for any other hub/stage, follow the same steps, but make sure you're under the right configuration:
`<app_name>-<stage>.<dynatrace-env>-<stage>:`

### Email

*The team prefers to be alerted via email, not MS Teams.*

1. Keep the MS Teams integration disabled, with the `https://empty` webhook:
```
<app_name>-PROD.EMEA-Prod:
  - name: CD_<app_name> PROD
  - alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
  - webhook: https://empty
  - skipDeployment: "false"
```
2. Create a new configuration entry under config, using the email template:
```
config:
  - CD<app_name>email: email.json
```
3. Describe the configuration below it, using the following template:
```
CD<app_name>email.EMEA-Prod:
  - name: CD_<app_name> PROD
  - alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
  - receivers: distributionEmailexample@bmw.de
```
4. Save and commit the changes:
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```

### ITSM
Coming soon!

Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -0,0 +1,4 @@
|
|||
|
||||
### How to configure management zones?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -0,0 +1,4 @@
|
|||
|
||||
### How to configure management zones?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -0,0 +1,4 @@
|
|||
|
||||
### How to configure management zones?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -0,0 +1,4 @@
|
|||
|
||||
### How to configure notification systems?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -0,0 +1,4 @@
|
|||
|
||||
### How to configure management zones?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -0,0 +1,4 @@
|
|||
|
||||
### How to configure notification systems?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -0,0 +1,4 @@
|
|||
|
||||
### How to configure management zones?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
### Service Naming Rules

A typical case could be that you access *Transactions & Services* and you find two services that look exactly the same:
*DataDownloadV1*
*DataDownloadV1*
### How to configure service naming

If you drill down into each service and check its process group, you may have a PROD and an E2E process group behind each service.

*Note: if you see that both process groups are exactly the same, please contact a Dynatrace expert to create a Process Group detection rule.*

In the case where the PGs are PROD and E2E, we need to create a rule that looks like this:

```
config:
- CDInfotainmentRule1: template.json

CDInfotainmentRule1:
- name: Infotainment Rule 1
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
- tag: Infotainment
- skipDeployment: false
```

The rule takes the Service Detected Name (the current name) and extracts, with a regex, the part of the Kubernetes namespace after the last "-", so "prod" or "e2e", resulting in:
*DataDownloadV1 - prod*
*DataDownloadV1 - e2e*

Now the services will be easy to identify.
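To make the name format concrete: each `{...}` placeholder is replaced by the corresponding metadata value, and an optional `/regex` suffix keeps only the matching or captured part. The sketch below is only an illustration with hypothetical metadata, not the actual Monaco/Dynatrace resolution code:
```python
import re

def resolve(name_format: str, attrs: dict) -> str:
    """Replace {Source:Attribute[/regex]} placeholders with (extracted) metadata values."""
    def substitute(match: re.Match) -> str:
        key, _, pattern = match.group(1).partition("/")
        value = attrs[key]
        if pattern:
            m = re.search(pattern, value)
            # Use the first capture group if the regex defines one, else the whole match.
            value = m.group(1) if m and m.groups() else (m.group(0) if m else value)
        return value
    return re.sub(r"\{([^{}]+)\}", substitute, name_format)

# Hypothetical metadata for one service instance.
attrs = {
    "Service:DetectedName": "DataDownloadV1",
    "ProcessGroup:KubernetesNamespace": "bon-information-prod",
}
print(resolve("{Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}", attrs))
# DataDownloadV1 - prod
```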
You can create rules based on any property/metadata. Some other placeholder examples:
{Service:DatabaseName} - E2E
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
@ -1,13 +1,4 @@
## Update dashboard configuration

- Configuration changes (e.g. in dashboards or alerting profiles) must be done via a pull request. A dashboard changed only in the environment will be overwritten by Monaco.
- How to generate changes in your dashboards?
1. Modify the dashboard within the Dynatrace UI with the intended changes.
2. Copy the JSON of the dashboard (it can be found under the dashboard settings).
3. Paste the copied JSON over the Monaco JSON, overwriting it (see the sketch after these steps for a quick validity check).
4. Commit and open a pull request to merge the branch into master:
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```
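Before committing, it can help to confirm that the pasted file is still well-formed JSON, since a truncated paste or stray trailing comma is easy to miss. A minimal check (the file path is only an example, adjust it to your repository layout):
```python
import json
from pathlib import Path

# Example path; adjust it to where your Monaco dashboard JSON lives.
dashboard_file = Path("CD_<app_name>/dashboard/dashboard.json")

with dashboard_file.open(encoding="utf-8") as f:
    json.load(f)  # raises json.JSONDecodeError if the pasted content is broken

print(f"{dashboard_file} is valid JSON")
```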
### How to configure dashboards?

Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
@ -1,76 +1,4 @@
## Management Zones configuration

### Excluding noisy services
### How to configure management zones?

*If you find services that are not relevant for the analysis, you can exclude them from the MZ.*

#### HealthResource, PingResource, PrometheusResource services

*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice is to exclude the ones that are not relevant for monitoring; for example, some BMW teams have excluded HealthResource, PingResource, and PrometheusResource.*

**How to exclude HealthResource?**
1. Open the *default.json* configuration file under the *CD_<app_name>/management-zone/* folder.
2. Copy the following rule template:
```
{
  "comparisonInfo": {
    "caseSensitive": true,
    "negate": true,
    "operator": "CONTAINS",
    "type": "STRING",
    "value": "HealthResource"
  },
  "key": {
    "attribute": "SERVICE_NAME"
  }
}
```
3. Add it under the conditions of the `"type": "SERVICE"` rule (the negated `CONTAINS` keeps only services whose name does not contain "HealthResource"; see the sketch after these steps). It should look like this:
```
{
  "conditions": [
    {
      "comparisonInfo": {
        "negate": false,
        "operator": "EQUALS",
        "type": "TAG",
        "value": {
          "context": "CONTEXTLESS",
          "key": "Component",
          "value": "{{.tag}}"
        }
      },
      "key": {
        "attribute": "SERVICE_TAGS"
      }
    },
    {
      "comparisonInfo": {
        "caseSensitive": true,
        "negate": true,
        "operator": "CONTAINS",
        "type": "STRING",
        "value": "HealthResource"
      },
      "key": {
        "attribute": "SERVICE_NAME"
      }
    }
  ],
  "enabled": true,
  "propagationTypes": [
    "SERVICE_TO_PROCESS_GROUP_LIKE",
    "SERVICE_TO_HOST_LIKE"
  ],
  "type": "SERVICE"
}
```
4. Commit and open a pull request to merge the branch into master:
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```
Note: you can use the same logic to exclude (or include) any other entity in your Management Zone.
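To see why the rule above excludes HealthResource rather than selecting it, it helps to spell out how the two conditions combine: a service belongs to the zone only if every condition in the rule holds, and `"negate": true` flips the `CONTAINS` check into "does not contain". A small sketch of that evaluation logic, using hypothetical service data rather than the Dynatrace engine:
```python
def matches_zone(service: dict, conditions: list[dict]) -> bool:
    """A service belongs to the zone only if ALL conditions evaluate to true."""
    for cond in conditions:
        attribute = cond["key"]["attribute"]
        info = cond["comparisonInfo"]
        if info["type"] == "TAG":
            # TAG/EQUALS simplified here to: does the service carry this tag key?
            result = info["value"]["key"] in service.get("tags", {})
        else:
            # STRING/CONTAINS on e.g. SERVICE_NAME
            result = info["value"] in service.get(attribute, "")
        if info.get("negate", False):
            result = not result
        if not result:
            return False
    return True

conditions = [
    {"key": {"attribute": "SERVICE_TAGS"},
     "comparisonInfo": {"negate": False, "operator": "EQUALS", "type": "TAG",
                        "value": {"key": "Component"}}},
    {"key": {"attribute": "SERVICE_NAME"},
     "comparisonInfo": {"negate": True, "operator": "CONTAINS", "type": "STRING",
                        "value": "HealthResource"}},
]

# Hypothetical services: both carry the Component tag, one is the health endpoint.
print(matches_zone({"tags": {"Component": "demo"}, "SERVICE_NAME": "DataDownloadV1"}, conditions))  # True
print(matches_zone({"tags": {"Component": "demo"}, "SERVICE_NAME": "HealthResource"}, conditions))  # False
```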
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
@ -1,60 +1,4 @@
## Configure Notification System
### How to configure notification systems?

### MS Teams - Default

*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your **EMEA PROD**.*

1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an `https://empty` webhook (not configured).
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod` entry:
```
<app_name>-PROD.EMEA-Prod:
- name: CD_<app_name> PROD
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
- webhook: <Add webhook here>
- skipDeployment: "false"
```
4. Save and commit the changes:
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
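Before wiring the webhook into *notification.yaml*, it can be worth sending a test message to it so you know the URL is accepted. A minimal sketch using only the Python standard library; the URL is a placeholder, and the simple `{"text": ...}` payload is the one accepted by classic Teams incoming webhooks (workflow-based webhooks may expect a different payload):
```python
import json
import urllib.request

# Placeholder URL: paste the incoming webhook you created in MS Teams here.
webhook_url = "https://example.webhook.office.com/webhookb2/..."

payload = json.dumps({"text": "Dynatrace notification test from <app_name>"}).encode("utf-8")
request = urllib.request.Request(
    webhook_url,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    # A valid classic incoming webhook answers a test post with HTTP 200.
    print(response.status)
```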
### Email

*The team prefers to be alerted via email, not MS Teams.*

1. Keep the MS Teams integration disabled, with the `https://empty` webhook:
```
<app_name>-PROD.EMEA-Prod:
- name: CD_<app_name> PROD
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
- webhook: https://empty
- skipDeployment: "false"
```
2. Create a new configuration entry under config, using the email template:
```
config:
- CD<app_name>email: email.json
```
3. Describe the configuration below it, using the following template:
```
CD<app_name>email.EMEA-Prod:
- name: CD_<app_name> PROD
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
- receivers: distributionEmailexample@bmw.de
```
4. Save and commit the changes:
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```

### ITSM
Coming soon!
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -1,76 +1,4 @@
|
|||
## Management Zones configuration
|
||||
|
||||
### Excluding noisy services
|
||||
### How to configure management zones?
|
||||
|
||||
*If you find services that are not relevant for the analysis, you can exclude them from the MZ.*
|
||||
|
||||
#### HealthResource, PingResource, PrometheusResource services
|
||||
|
||||
*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice would be to exclude*
|
||||
*the ones that are not relevant for monitoring. i.e. For some BMW's teams, HealthResource, PingResource, PrometheusResource have been excluded.*
|
||||
|
||||
**How to exclude HealthResource?**
|
||||
1. Open the file *default.json* configuration under the *CD_<app_name>/management-zone/* folder.
|
||||
2. Copy the following rule template:
|
||||
```
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
```
|
||||
2. Add it under the `"type": "SERVICE"` rule's conditions. It should look like this:
|
||||
```
|
||||
{
|
||||
"conditions": [
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"negate": false,
|
||||
"operator": "EQUALS",
|
||||
"type": "TAG",
|
||||
"value": {
|
||||
"context": "CONTEXTLESS",
|
||||
"key": "Component",
|
||||
"value": "{{.tag}}"
|
||||
},
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_TAGS"
|
||||
}
|
||||
}
|
||||
],
|
||||
"enabled": true,
|
||||
"propagationTypes": [
|
||||
"SERVICE_TO_PROCESS_GROUP_LIKE",
|
||||
"SERVICE_TO_HOST_LIKE"
|
||||
],
|
||||
"type": "SERVICE"
|
||||
}
|
||||
```
|
||||
|
||||
3. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: you can use the same logic to exclude (or include) any other entity to your Management Zone.
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -1,76 +1,4 @@
|
|||
## Management Zones configuration
|
||||
|
||||
### Excluding noisy services
|
||||
### How to configure management zones?
|
||||
|
||||
*If you find services that are not relevant for the analysis, you can exclude them from the MZ.*
|
||||
|
||||
#### HealthResource, PingResource, PrometheusResource services
|
||||
|
||||
*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice would be to exclude*
|
||||
*the ones that are not relevant for monitoring. i.e. For some BMW's teams, HealthResource, PingResource, PrometheusResource have been excluded.*
|
||||
|
||||
**How to exclude HealthResource?**
|
||||
1. Open the file *default.json* configuration under the *CD_<app_name>/management-zone/* folder.
|
||||
2. Copy the following rule template:
|
||||
```
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
```
|
||||
2. Add it under the `"type": "SERVICE"` rule's conditions. It should look like this:
|
||||
```
|
||||
{
|
||||
"conditions": [
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"negate": false,
|
||||
"operator": "EQUALS",
|
||||
"type": "TAG",
|
||||
"value": {
|
||||
"context": "CONTEXTLESS",
|
||||
"key": "Component",
|
||||
"value": "{{.tag}}"
|
||||
},
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_TAGS"
|
||||
}
|
||||
}
|
||||
],
|
||||
"enabled": true,
|
||||
"propagationTypes": [
|
||||
"SERVICE_TO_PROCESS_GROUP_LIKE",
|
||||
"SERVICE_TO_HOST_LIKE"
|
||||
],
|
||||
"type": "SERVICE"
|
||||
}
|
||||
```
|
||||
|
||||
3. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: you can use the same logic to exclude (or include) any other entity to your Management Zone.
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -1,76 +1,4 @@
|
|||
## Management Zones configuration
|
||||
|
||||
### Excluding noisy services
|
||||
### How to configure management zones?
|
||||
|
||||
*If you find services that are not relevant for the analysis, you can exclude them from the MZ.*
|
||||
|
||||
#### HealthResource, PingResource, PrometheusResource services
|
||||
|
||||
*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice would be to exclude*
|
||||
*the ones that are not relevant for monitoring. i.e. For some BMW's teams, HealthResource, PingResource, PrometheusResource have been excluded.*
|
||||
|
||||
**How to exclude HealthResource?**
|
||||
1. Open the file *default.json* configuration under the *CD_<app_name>/management-zone/* folder.
|
||||
2. Copy the following rule template:
|
||||
```
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
```
|
||||
2. Add it under the `"type": "SERVICE"` rule's conditions. It should look like this:
|
||||
```
|
||||
{
|
||||
"conditions": [
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"negate": false,
|
||||
"operator": "EQUALS",
|
||||
"type": "TAG",
|
||||
"value": {
|
||||
"context": "CONTEXTLESS",
|
||||
"key": "Component",
|
||||
"value": "{{.tag}}"
|
||||
},
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_TAGS"
|
||||
}
|
||||
}
|
||||
],
|
||||
"enabled": true,
|
||||
"propagationTypes": [
|
||||
"SERVICE_TO_PROCESS_GROUP_LIKE",
|
||||
"SERVICE_TO_HOST_LIKE"
|
||||
],
|
||||
"type": "SERVICE"
|
||||
}
|
||||
```
|
||||
|
||||
3. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: you can use the same logic to exclude (or include) any other entity to your Management Zone.
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -1,76 +1,4 @@
|
|||
## Management Zones configuration
|
||||
|
||||
### Excluding noisy services
|
||||
### How to configure management zones?
|
||||
|
||||
*If you find services that are not relevant for the analysis, you can exclude them from the MZ.*
|
||||
|
||||
#### HealthResource, PingResource, PrometheusResource services
|
||||
|
||||
*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice would be to exclude*
|
||||
*the ones that are not relevant for monitoring. i.e. For some BMW's teams, HealthResource, PingResource, PrometheusResource have been excluded.*
|
||||
|
||||
**How to exclude HealthResource?**
|
||||
1. Open the file *default.json* configuration under the *CD_<app_name>/management-zone/* folder.
|
||||
2. Copy the following rule template:
|
||||
```
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
```
|
||||
2. Add it under the `"type": "SERVICE"` rule's conditions. It should look like this:
|
||||
```
|
||||
{
|
||||
"conditions": [
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"negate": false,
|
||||
"operator": "EQUALS",
|
||||
"type": "TAG",
|
||||
"value": {
|
||||
"context": "CONTEXTLESS",
|
||||
"key": "Component",
|
||||
"value": "{{.tag}}"
|
||||
},
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_TAGS"
|
||||
}
|
||||
}
|
||||
],
|
||||
"enabled": true,
|
||||
"propagationTypes": [
|
||||
"SERVICE_TO_PROCESS_GROUP_LIKE",
|
||||
"SERVICE_TO_HOST_LIKE"
|
||||
],
|
||||
"type": "SERVICE"
|
||||
}
|
||||
```
|
||||
|
||||
3. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: you can use the same logic to exclude (or include) any other entity to your Management Zone.
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
@@ -1,35 +1,4 @@
### Service Naming Rules

A typical case: you open *Transactions & Services* and find two services that look exactly the same:
*DataDownloadV1*
*DataDownloadV1*
### How to configure service naming

If you drill down into each service and check its process group, you may see that one service belongs to a PROD process group and the other to an E2E one.

*Note: if you see that both process groups are exactly the same, please contact a Dynatrace expert to create a Process Group detection rule.*

If the process groups are PROD and E2E, create a rule that looks like this:

```
config:
- CDInfotainmentRule1: template.json

CDInfotainmentRule1:
- name: Infotainment Rule 1
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
- tag: Infotainment
- skipDeployment: false
```

The rule takes the detected service name (the current name) and appends the part of the Kubernetes namespace after the last "-" (extracted with a regex), so -prod or -e2e, resulting in:
*DataDownloadV1 - prod*
*DataDownloadV1 - e2e*

Now the services are easy to identify (a worked example of the placeholder follows).
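
To make the regex part concrete, here is a small sketch of how the nameFormat above resolves. The namespace values are assumptions for illustration only; substitute your own namespaces.
```
nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}

DetectedName = DataDownloadV1, KubernetesNamespace = bon-information-prod  ->  DataDownloadV1 - prod
DetectedName = DataDownloadV1, KubernetesNamespace = bon-information-e2e   ->  DataDownloadV1 - e2e
```
The expression after the "/" is a regular expression; [^-]+$ keeps only the characters after the last "-" in the namespace.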

You can create rules based on any property or metadata. Some other placeholder examples:
{Service:DatabaseName} - E2E
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}

Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
@@ -1,13 +1,4 @@
## Update dashboard configuration

- Configuration changes (in dashboards, alerting profiles, etc.) must be done via a pull request. Changing a dashboard only in the environment will cause it to be overwritten by Monaco.
- How to make changes to your dashboards:
1. Modify the dashboard within the Dynatrace UI with the intended changes.
2. Copy the JSON of the dashboard (it can be found under the dashboard settings).
3. Paste the copied JSON over the Monaco JSON, overwriting it.
4. Commit and open a pull request to merge the branch into master (a fuller example follows this list):
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```
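
If you have not created a working branch yet, the complete flow could look like this. This is a sketch only; the branch name is a hypothetical example and your team may have its own branching conventions.
```
# create a branch for the dashboard change (example name)
git checkout -b <app_name>-dashboard-update
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <app_name>-dashboard-update
```
After the push, open the pull request against master so that Monaco rolls out the change once the branch is merged.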

### How to configure dashboards?

Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
@@ -1,76 +1,4 @@
## Management Zones configuration

### Excluding noisy services
### How to configure management zones?

*If you find services that are not relevant for the analysis, you can exclude them from the Management Zone.*

#### HealthResource, PingResource, PrometheusResource services

*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice is to exclude the ones that are not relevant for monitoring; for example, some BMW teams have excluded HealthResource, PingResource and PrometheusResource.*

**How to exclude HealthResource?**
1. Open the *default.json* configuration file under the *CD_<app_name>/management-zone/* folder.
2. Copy the following rule template:
```
{
  "comparisonInfo": {
    "caseSensitive": true,
    "negate": true,
    "operator": "CONTAINS",
    "type": "STRING",
    "value": "HealthResource"
  },
  "key": {
    "attribute": "SERVICE_NAME"
  }
}
```
3. Add it as an additional entry in the conditions array of the `"type": "SERVICE"` rule. It should look like this:
```
{
  "conditions": [
    {
      "comparisonInfo": {
        "negate": false,
        "operator": "EQUALS",
        "type": "TAG",
        "value": {
          "context": "CONTEXTLESS",
          "key": "Component",
          "value": "{{.tag}}"
        }
      },
      "key": {
        "attribute": "SERVICE_TAGS"
      }
    },
    {
      "comparisonInfo": {
        "caseSensitive": true,
        "negate": true,
        "operator": "CONTAINS",
        "type": "STRING",
        "value": "HealthResource"
      },
      "key": {
        "attribute": "SERVICE_NAME"
      }
    }
  ],
  "enabled": true,
  "propagationTypes": [
    "SERVICE_TO_PROCESS_GROUP_LIKE",
    "SERVICE_TO_HOST_LIKE"
  ],
  "type": "SERVICE"
}
```

4. Commit and open a pull request to merge the branch into master:
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```
Note: you can use the same logic to exclude (or include) any other entity in your Management Zone; an example follows.
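
For instance, to also exclude the PingResource services mentioned above, the same template can be added once more to the conditions array with only the value changed (shown here as a sketch; extend it analogously for PrometheusResource or any other service name):
```
{
  "comparisonInfo": {
    "caseSensitive": true,
    "negate": true,
    "operator": "CONTAINS",
    "type": "STRING",
    "value": "PingResource"
  },
  "key": {
    "attribute": "SERVICE_NAME"
  }
}
```
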
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -1,76 +1,4 @@
|
|||
## Management Zones configuration
|
||||
|
||||
### Excluding noisy services
|
||||
### How to configure management zones?
|
||||
|
||||
*If you find services that are not relevant for the analysis, you can exclude them from the MZ.*
|
||||
|
||||
#### HealthResource, PingResource, PrometheusResource services
|
||||
|
||||
*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice would be to exclude*
|
||||
*the ones that are not relevant for monitoring. i.e. For some BMW's teams, HealthResource, PingResource, PrometheusResource have been excluded.*
|
||||
|
||||
**How to exclude HealthResource?**
|
||||
1. Open the file *default.json* configuration under the *CD_<app_name>/management-zone/* folder.
|
||||
2. Copy the following rule template:
|
||||
```
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
```
|
||||
2. Add it under the `"type": "SERVICE"` rule's conditions. It should look like this:
|
||||
```
|
||||
{
|
||||
"conditions": [
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"negate": false,
|
||||
"operator": "EQUALS",
|
||||
"type": "TAG",
|
||||
"value": {
|
||||
"context": "CONTEXTLESS",
|
||||
"key": "Component",
|
||||
"value": "{{.tag}}"
|
||||
},
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_TAGS"
|
||||
}
|
||||
}
|
||||
],
|
||||
"enabled": true,
|
||||
"propagationTypes": [
|
||||
"SERVICE_TO_PROCESS_GROUP_LIKE",
|
||||
"SERVICE_TO_HOST_LIKE"
|
||||
],
|
||||
"type": "SERVICE"
|
||||
}
|
||||
```
|
||||
|
||||
3. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: you can use the same logic to exclude (or include) any other entity to your Management Zone.
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -1,76 +1,4 @@
|
|||
## Management Zones configuration
|
||||
|
||||
### Excluding noisy services
|
||||
### How to configure management zones?
|
||||
|
||||
*If you find services that are not relevant for the analysis, you can exclude them from the MZ.*
|
||||
|
||||
#### HealthResource, PingResource, PrometheusResource services
|
||||
|
||||
*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice would be to exclude*
|
||||
*the ones that are not relevant for monitoring. i.e. For some BMW's teams, HealthResource, PingResource, PrometheusResource have been excluded.*
|
||||
|
||||
**How to exclude HealthResource?**
|
||||
1. Open the file *default.json* configuration under the *CD_<app_name>/management-zone/* folder.
|
||||
2. Copy the following rule template:
|
||||
```
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
```
|
||||
2. Add it under the `"type": "SERVICE"` rule's conditions. It should look like this:
|
||||
```
|
||||
{
|
||||
"conditions": [
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"negate": false,
|
||||
"operator": "EQUALS",
|
||||
"type": "TAG",
|
||||
"value": {
|
||||
"context": "CONTEXTLESS",
|
||||
"key": "Component",
|
||||
"value": "{{.tag}}"
|
||||
},
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_TAGS"
|
||||
}
|
||||
}
|
||||
],
|
||||
"enabled": true,
|
||||
"propagationTypes": [
|
||||
"SERVICE_TO_PROCESS_GROUP_LIKE",
|
||||
"SERVICE_TO_HOST_LIKE"
|
||||
],
|
||||
"type": "SERVICE"
|
||||
}
|
||||
```
|
||||
|
||||
3. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: you can use the same logic to exclude (or include) any other entity to your Management Zone.
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -1,76 +1,4 @@
|
|||
## Management Zones configuration
|
||||
|
||||
### Excluding noisy services
|
||||
### How to configure management zones?
|
||||
|
||||
*If you find services that are not relevant for the analysis, you can exclude them from the MZ.*
|
||||
|
||||
#### HealthResource, PingResource, PrometheusResource services
|
||||
|
||||
*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice would be to exclude*
|
||||
*the ones that are not relevant for monitoring. i.e. For some BMW's teams, HealthResource, PingResource, PrometheusResource have been excluded.*
|
||||
|
||||
**How to exclude HealthResource?**
|
||||
1. Open the file *default.json* configuration under the *CD_<app_name>/management-zone/* folder.
|
||||
2. Copy the following rule template:
|
||||
```
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
```
|
||||
2. Add it under the `"type": "SERVICE"` rule's conditions. It should look like this:
|
||||
```
|
||||
{
|
||||
"conditions": [
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"negate": false,
|
||||
"operator": "EQUALS",
|
||||
"type": "TAG",
|
||||
"value": {
|
||||
"context": "CONTEXTLESS",
|
||||
"key": "Component",
|
||||
"value": "{{.tag}}"
|
||||
},
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_TAGS"
|
||||
}
|
||||
}
|
||||
],
|
||||
"enabled": true,
|
||||
"propagationTypes": [
|
||||
"SERVICE_TO_PROCESS_GROUP_LIKE",
|
||||
"SERVICE_TO_HOST_LIKE"
|
||||
],
|
||||
"type": "SERVICE"
|
||||
}
|
||||
```
|
||||
|
||||
3. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: you can use the same logic to exclude (or include) any other entity to your Management Zone.
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming
|
||||
|
||||
#### Detection Rule or Naming?
|
||||
### How to configure process groups?
|
||||
|
||||
For the explanation, we're using a real example of the Infotainment application:
|
||||
|
||||
!(PGNaming1)[../../../../img/PGNaming1.PNG]
|
||||
|
||||
Before working with your dashboards and alerting profiles, an important task to do when working with Dynatrace is checking
|
||||
the structure of your applications (process groups). You can do that clicking under *technologies* and filter using your
|
||||
application Management Zone.
|
||||
|
||||
In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in**
|
||||
**this case, you MUST follow this guideline**
|
||||
|
||||
Next step would be to open both process groups and compare the metadata. In that way, you can identify if all process instances are
|
||||
part of the same application or not. An easy way to do that is asking yourself: how many instances of my application do i have?
|
||||
|
||||
If you have 4 instances in total and you're able to see 2 in one PG and other 2 in other PG it means that **they are part of the **
|
||||
**same application**
|
||||
|
||||
Another situation could be that checking on the metadata, then you see that are **two different application** and Dynatrace is just naming
|
||||
the process group in the same way
|
||||
|
||||
*Same application*
|
||||
- Problem: Dynatrace is creating two different process groups, what transalates in two separated services for the same application. Instead of
|
||||
seeing all the traffic in one service, you will have it splitted and it will complicate your monitoring
|
||||
- Solution: create a process group detection rule. Contact Dynatrace Expert
|
||||
|
||||
*Different application*
|
||||
- Problem: Dynatrace is just naming in the same way applications that are different.
|
||||
- Solution: This case is less severe, since it can be fixed with a process group naming rule.
|
||||
|
||||
|
||||
What about our example?
|
||||
!(PGNaming2)[../../../../img/PGNaming2.PNG]
|
||||
!(PGNaming3)[../../../../img/PGNaming3.PNG]
|
||||
|
||||
Based on the feedback of the infotaiment team, each process group is a different application (microservice) and it's visible in the kubernetes container/workload
|
||||
within the metadata of each Process Group.
|
||||
|
||||
#### How to create a Process Group Detection Rule
|
||||
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
The result of the rule will be renaming the PG to this:
|
||||
```
|
||||
bon-information-prod ipa
|
||||
bon-information-prod rsl
|
||||
```
|
||||
|
||||
Other possible placeholders that you can use are for example:
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
|
||||
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
|
||||
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
|
||||
{ProcessGroup:KubernetesNamespace}
|
||||
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
|
||||
|
||||
You can combine different ones. Check the (documentation)[link] for more
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.
|
||||
|
|
@ -1,35 +1,4 @@
|
|||
### Service Naming Rules
|
||||
|
||||
A typical case could be that you access to *Transaction & Services* and you find two services that are exactly the same:
|
||||
*DataDownloadV1*
|
||||
*DataDownloadV1*
|
||||
### How to configure service naming
|
||||
|
||||
If you drilldown into the service and you check in the process group, you may have a PROD and a E2E for each service.
|
||||
|
||||
*Note: if you see that both process group are exactly the same, please contact a Dynatrace expert to create a Process*
|
||||
*Group detection rule*
|
||||
|
||||
In the case the PG are PROD and E2E, then we need to create a rule that looks like this:
|
||||
|
||||
```
|
||||
config:
|
||||
- CDInfotainmentRule1: template.json
|
||||
|
||||
CDInfotainmentRule1:
|
||||
- name: Infotainment Rule 1
|
||||
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
- tag: Infotainment
|
||||
- skipDeployment: false
|
||||
```
|
||||
|
||||
The rule will get the Service Detected Name (current name) and it will extract (with a regex) the part of the kubernetes namespace after the "-", so -prod or -e2e, resulting in:
|
||||
*DataDownloadV1 - prod*
|
||||
*DataDownloadV1 - e2e*
|
||||
|
||||
Now, services will be easy to identify.
|
||||
|
||||
You can create rules based on any property/metadata. Some other placeholder's eamples:
|
||||
{Service:DatabaseName} - E2E
|
||||
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
|
||||
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
|
||||
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.
|
||||
|
|
@ -1,13 +1,4 @@
|
|||
## Update dashboard configuration
|
||||
|
||||
- Configuration changes (like in dashboards, alerting profiles) must be done via a pull request. Changing a dashboard just in the environment, will cause that it will be overwritten by Monaco.
|
||||
- How to generate changes in your dashboards?
|
||||
1. Modify the dashboard within the Dynatrace UI with the intended changes.
|
||||
2. Copy the JSON of the dashboards. (Can be found under the dashboard settings)
|
||||
3. Paste the copied JSON under the Monaco JSON, overwrite it.
|
||||
4. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
### How to configure dashboards?
|
||||
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.
|
||||
|
|
@ -1,76 +1,4 @@
|
|||
## Management Zones configuration
|
||||
|
||||
### Excluding noisy services
|
||||
### How to configure management zones?
|
||||
|
||||
*If you find services that are not relevant for the analysis, you can exclude them from the MZ.*
|
||||
|
||||
#### HealthResource, PingResource, PrometheusResource services
|
||||
|
||||
*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice would be to exclude*
|
||||
*the ones that are not relevant for monitoring. i.e. For some BMW's teams, HealthResource, PingResource, PrometheusResource have been excluded.*
|
||||
|
||||
**How to exclude HealthResource?**
|
||||
1. Open the file *default.json* configuration under the *CD_<app_name>/management-zone/* folder.
|
||||
2. Copy the following rule template:
|
||||
```
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
```
|
||||
2. Add it under the `"type": "SERVICE"` rule's conditions. It should look like this:
|
||||
```
|
||||
{
|
||||
"conditions": [
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"negate": false,
|
||||
"operator": "EQUALS",
|
||||
"type": "TAG",
|
||||
"value": {
|
||||
"context": "CONTEXTLESS",
|
||||
"key": "Component",
|
||||
"value": "{{.tag}}"
|
||||
},
|
||||
{
|
||||
"comparisonInfo": {
|
||||
"caseSensitive": true,
|
||||
"negate": true,
|
||||
"operator": "CONTAINS",
|
||||
"type": "STRING",
|
||||
"value": "HealthResource"
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_NAME"
|
||||
}
|
||||
}
|
||||
},
|
||||
"key": {
|
||||
"attribute": "SERVICE_TAGS"
|
||||
}
|
||||
}
|
||||
],
|
||||
"enabled": true,
|
||||
"propagationTypes": [
|
||||
"SERVICE_TO_PROCESS_GROUP_LIKE",
|
||||
"SERVICE_TO_HOST_LIKE"
|
||||
],
|
||||
"type": "SERVICE"
|
||||
}
|
||||
```
|
||||
|
||||
3. Commit and pull request to merge the branch to the master:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: you can use the same logic to exclude (or include) any other entity to your Management Zone.
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.
|
||||
|
|
@ -1,60 +1,4 @@
|
|||
|
||||
## Configure Notification System
|
||||
### How to configure notification systems?
|
||||
|
||||
### MS Teams - Default
|
||||
|
||||
*Let's suppose you would like to start receiving alerts from Dynatrace via MS Teams just for your *EMEA PROD*.*
|
||||
|
||||
1. Open *notification.yaml* under your application configuration folder. By default, all notification systems are configured via MS Teams with an
|
||||
https://empty webhook (not configured).
|
||||
2. Create an incoming webhook in MS Teams. [How to?](https://www.dynatrace.com/support/help/shortlink/set-up-msteams-integration#configuration-in-microsoft-teams)
|
||||
3. Add the incoming webhook under the webhook parameter for the `<app_name>-PROD.EMEA-Prod`:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: <Add webhook here>
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
Note: If you want to enable MS Teams for any other hub/stage, follow the same steps but make sure you're under the right configuration:
|
||||
`<app_name>-<stage>.<dynatrace-env>-<stage>:`
|
||||
|
||||
### Email
|
||||
|
||||
*The team prefers to be alerted via email, not MS Teams*
|
||||
|
||||
1. Keep the MS Teams integration disabled, with the https://empty webhook:
|
||||
```
|
||||
<app_name>-PROD.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- webhook: https://empty
|
||||
- skipDeployment: "false"
|
||||
```
|
||||
2. Create a new configuration template under config, using the email template:
|
||||
```
|
||||
config:
|
||||
- CD<app_name>email: email.json
|
||||
```
|
||||
3. Describe the configuration below, using the following template:
|
||||
```
|
||||
CD<app_name>email.EMEA-Prod:
|
||||
- name: CD_<app_name> PROD
|
||||
- alertingProfile: CD_<app_name>/alerting-profile/CD<app_name>-PROD.id
|
||||
- receivers: distributionEmailexample@bmw.de`
|
||||
```
|
||||
4. Save and commit changes:
|
||||
```
|
||||
git add <changes>
|
||||
git commit -m "<app_name> configuration changes"
|
||||
git push -u origin <branch>
|
||||
```
|
||||
|
||||
### ITSM
|
||||
Coming soon!
|
||||
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Problem+Notification+Integrations) page to configure your notification systems.
|
||||
|
|
@ -1,67 +1,4 @@
|
|||
### Process Group Detection Rules and Naming

#### Detection Rule or Naming?

### How to configure process groups?

For the explanation, we're using a real example of the Infotainment application:

![PGNaming1](../../../../img/PGNaming1.PNG)

Before working on your dashboards and alerting profiles, an important task when working with Dynatrace is to check the structure of your applications (process groups). You can do that by clicking on *Technologies* and filtering by your application's Management Zone.

In the picture above, there are two Process Groups called bon-information-prod. **If you see duplicated process groups like in this case, you MUST follow this guideline.**

The next step is to open both process groups and compare their metadata. That way, you can identify whether all process instances are part of the same application or not. An easy way to do that is to ask yourself: how many instances of my application do I have?

If you have 4 instances in total and you see 2 in one PG and the other 2 in the other PG, it means that **they are part of the same application**.

Another situation could be that, checking the metadata, you see that they are **two different applications** and Dynatrace is just naming the process groups in the same way.

*Same application*
- Problem: Dynatrace is creating two different process groups, which translates into two separate services for the same application. Instead of seeing all the traffic in one service, you will have it split, which will complicate your monitoring.
- Solution: create a process group detection rule. Contact a Dynatrace expert.

*Different applications*
- Problem: Dynatrace is just giving the same name to applications that are different.
- Solution: This case is less severe, since it can be fixed with a process group naming rule.

What about our example?
![PGNaming2](../../../../img/PGNaming2.PNG)
![PGNaming3](../../../../img/PGNaming3.PNG)

Based on the feedback from the Infotainment team, each process group is a different application (microservice), and this is visible in the Kubernetes container/workload within the metadata of each Process Group.

#### How to create a Process Group Naming Rule
1. Open the *conditional-naming-processgroup.yaml* file and create a rule that looks like this:
```
config:
- CDInfotainmentRule1: template.json

CDInfotainmentRule1:
- name: Infotainment Rule 1
- nameFormat: {ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName}
- tag: Infotainment
- skipDeployment: false
```
The result of the rule will be that the PGs are renamed to:
```
bon-information-prod ipa
bon-information-prod rsl
```

Other possible placeholders that you can use are, for example:
```
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesContainerName/[^\\-]*$}
{ProcessGroup:KubernetesNamespace} - {ProcessGroup:KubernetesFullPodName/buffet-(.*?)-}
{ProcessGroup:DetectedName} - {HostGroup:Name/[^\\_]*$}
{ProcessGroup:KubernetesNamespace}
{ProcessGroup:CommandLineArgs/.*?\\-f\\s\\/www\\/(.*?)\\/generated\\/httpd\\.conf.*?}
```

You can combine different ones. Check the [documentation](link) for more details.
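
As an illustration only (the rule name and tag below are hypothetical, not part of the Infotainment setup), a rule that combines a regex-extracting placeholder with a plain one could look like this:
```
config:
- CDMyAppRule1: template.json

CDMyAppRule1:
- name: MyApp Rule 1
- nameFormat: {ProcessGroup:KubernetesNamespace/[^\\-]*$} - {ProcessGroup:KubernetesContainerName}
- tag: MyApp
- skipDeployment: false
```
Assuming the namespace follows the `<name>-<stage>` pattern (e.g. bon-information-prod), the first placeholder keeps only the part after the last `-`, so the PG would end up named something like `prod - ipa`.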
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Process+Group+Naming) page to configure your process groups.

@@ -1,35 +1,4 @@

### Service Naming Rules

A typical case could be that you access *Transactions & Services* and find two services that are named exactly the same:
*DataDownloadV1*
*DataDownloadV1*

### How to configure service naming

If you drill down into each service and check its process group, you may find that one belongs to a PROD process group and the other to an E2E one.

*Note: if you see that both process groups are exactly the same, please contact a Dynatrace expert to create a Process Group detection rule.*

In the case that the PGs are PROD and E2E, we need to create a rule that looks like this:

```
config:
- CDInfotainmentRule1: template.json

CDInfotainmentRule1:
- name: Infotainment Rule 1
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
- tag: Infotainment
- skipDeployment: false
```

The rule will take the detected service name (its current name) and extract, with a regex, the part of the Kubernetes namespace after the last "-" (prod or e2e), resulting in:
*DataDownloadV1 - prod*
*DataDownloadV1 - e2e*

Now, services will be easy to identify.

You can create rules based on any property/metadata. Some other placeholder examples:
```
{Service:DatabaseName} - E2E
{Service:WebServiceName} - {ProcessGroup:Kubernetes:microservice} - {ProcessGroup:Kubernetes:environment}
{Service:DetectedName} - {ProcessGroup:KubernetesContainerName} - {ProcessGroup:KubernetesNamespace/[^-]+$}
{Service:DetectedName} - {ProcessGroup:SpringBootProfileName/[^\\-]*$}
```
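
As a sketch only (the rule name and tag are hypothetical), any of these placeholders simply goes into the `nameFormat` of the same rule structure shown above:
```
config:
- CDMyAppServiceRule1: template.json

CDMyAppServiceRule1:
- name: MyApp Service Rule 1
- nameFormat: {Service:DetectedName} - {ProcessGroup:KubernetesContainerName}
- tag: MyApp
- skipDeployment: false
```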
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Service+Naming) page to configure your service naming.

@@ -1,13 +1,4 @@

## Update dashboard configuration

- Configuration changes (e.g. to dashboards or alerting profiles) must be done via a pull request. Changing a dashboard only in the environment will cause it to be overwritten by Monaco.
- How to make changes to your dashboards?
1. Modify the dashboard within the Dynatrace UI with the intended changes.
2. Copy the JSON of the dashboard (it can be found under the dashboard settings).
3. Paste the copied JSON into the Monaco JSON file, overwriting it.
4. Commit and open a pull request to merge the branch into master:
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```

### How to configure dashboards?

Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Dashboards) page to configure your dashboards.

@@ -1,76 +1,4 @@

## Management Zones configuration

### Excluding noisy services

### How to configure management zones?

*If you find services that are not relevant for the analysis, you can exclude them from the MZ.*

#### HealthResource, PingResource, PrometheusResource services

*After the deployment of the OneAgent, your services should appear under Transactions & Services. A good practice is to exclude the ones that are not relevant for monitoring; e.g. some BMW teams have excluded HealthResource, PingResource and PrometheusResource.*

**How to exclude HealthResource?**
1. Open the *default.json* configuration file under the *CD_<app_name>/management-zone/* folder.
2. Copy the following rule template:
```
{
  "comparisonInfo": {
    "caseSensitive": true,
    "negate": true,
    "operator": "CONTAINS",
    "type": "STRING",
    "value": "HealthResource"
  },
  "key": {
    "attribute": "SERVICE_NAME"
  }
}
```
3. Add it under the conditions of the `"type": "SERVICE"` rule. It should look like this:
```
{
  "conditions": [
    {
      "comparisonInfo": {
        "negate": false,
        "operator": "EQUALS",
        "type": "TAG",
        "value": {
          "context": "CONTEXTLESS",
          "key": "Component",
          "value": "{{.tag}}"
        }
      },
      "key": {
        "attribute": "SERVICE_TAGS"
      }
    },
    {
      "comparisonInfo": {
        "caseSensitive": true,
        "negate": true,
        "operator": "CONTAINS",
        "type": "STRING",
        "value": "HealthResource"
      },
      "key": {
        "attribute": "SERVICE_NAME"
      }
    }
  ],
  "enabled": true,
  "propagationTypes": [
    "SERVICE_TO_PROCESS_GROUP_LIKE",
    "SERVICE_TO_HOST_LIKE"
  ],
  "type": "SERVICE"
}
```

4. Commit and open a pull request to merge the branch into master:
```
git add <changes>
git commit -m "<app_name> configuration changes"
git push -u origin <branch>
```
Note: you can use the same logic to exclude (or include) any other entity in your Management Zone.
Please refer to [this](https://atc.bmwgroup.net/confluence/display/OPMAAS/Documentation+%7C+Management+Zones) page to configure your management zones.