Link to the video : https://www.youtube.com/watch?v=hfsgVrKedJM
Step-1
Create a Remove filter button as shown below
Step-2
Create a bookmark to clear filter and update the bookmark with all the data in that page.
Step-3
Assign that bookmark to the button you created, as shown below.
Step-4
Now, after the drill-through is applied, you can go to the page and click
on Remove filters to remove the drill-through filter and show all the data.
Once a slicer is selected and the Close button is clicked, the filter is not reflected, because we need to make some changes in the bookmark as shown below.
Step-1
Click on the bookmark that is tied to the Close button.
Step-2
Uncheck the data as shown below
https://www.youtube.com/watch?v=cyOquvfhzNM
https://www.youtube.com/watch?v=MeLaZFst02E
Step-1
Create a measure like below.
Cascading=
import pandas as pd
from azure.storage.blob import BlobServiceClient
import pyodbc
# ------------------ Azure Blob Storage Configuration ------------------
blob_connection_string = 'Connection string'
blob_container_name = 'Blob container name'
# Connect to Blob Storage
blob_service_client = BlobServiceClient.from_connection_string(blob_connection_string)
container_client = blob_service_client.get_container_client(blob_container_name)
# Fetch file names from Blob Storage
blob_file_names = [blob.name for blob in container_client.list_blobs()]
df_blob_files = pd.DataFrame(blob_file_names, columns=['FileName'])
# ------------------ Azure SQL Database Configuration ------------------
sql_server = 'Server Name'
sql_database = 'Database'
sql_username = 'user name'
sql_password = 'password'
sql_driver = 'ODBC Driver 17 for SQL Server'
sql_table = 'FileName table'
# Connect to SQL Server and fetch file names
sql_conn_str = f'DRIVER={sql_driver};SERVER={sql_server};DATABASE={sql_database};UID={sql_username};PWD={sql_password}'
sql_query = f'SELECT B2BFileName FROM {sql_table}'
with pyodbc.connect(sql_conn_str) as conn:
    df_sql_files = pd.read_sql(sql_query, conn)
# ------------------ Comparison Logic ------------------
# Find files in Blob Storage that are not in SQL Server
missing_files = df_blob_files[~df_blob_files['FileName'].isin(df_sql_files['B2BFileName'])]
# ------------------ Export to Excel ------------------
output_file = r'C:\Users\Desktop\MissingBlobFiles.xlsx'
missing_files.to_excel(output_file, index=False)
print(f"Missing file names exported to: {output_file}")
3. .isin(...) — This method checks whether each value in df_blob_files['FileName'] exists in the list of values from df_sql_files['B2BFileName']. It returns a boolean Series: True if the file name exists in SQL Server, False if it doesn't.
4. ~ (tilde operator) — This is a logical NOT operator. It inverts the boolean Series returned by .isin(...), so it now marks True for file names not found in SQL Server.
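The .isin(...) and ~ logic above can be seen on a tiny example. The file names here are made up purely for illustration; they do not come from any real storage account or table:

```python
import pandas as pd

# Toy data: three files in Blob Storage, two recorded in SQL Server
df_blob_files = pd.DataFrame({'FileName': ['a.csv', 'b.csv', 'c.csv']})
df_sql_files = pd.DataFrame({'B2BFileName': ['a.csv', 'c.csv']})

# .isin(...) returns a boolean Series (True where the file exists in SQL),
# and ~ inverts it, so True marks blob files missing from SQL Server
mask = ~df_blob_files['FileName'].isin(df_sql_files['B2BFileName'])
missing_files = df_blob_files[mask]
print(missing_files['FileName'].tolist())  # ['b.csv']
```

Only 'b.csv' survives the filter, because it is the one blob file with no matching row in the SQL table.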
Please look into the below video to view reference labels and details in the new card visual.
Link to video---https://www.youtube.com/watch?v=XTLo64sydck
Link to download pbix File--https://github.com/powerbibro/powerbibro/blob/main/PBI%20-%2020240309%20-%20Reference%20Label%20Updates.pbix
The video outlines a "SMART" framework for software engineers to leverage AI, and it recommends several tools under each letter of this framework to enhance productivity and skill.
Here's a summary of the tools mentioned:
S - Study This section focuses on using AI to study and learn effectively.
M - Make This part focuses on designing and presenting.
A - Automation This section covers AI coding tools.
R - Run This segment focuses on running terminal commands and programs.
T - Trial & Error This final piece emphasizes staying updated and experimenting with new tools due to the rapid evolution of technology.
The video encourages viewers to pick one or two tools to integrate into their workflow and continuously stay updated with new advancements in the market.
Kai concludes by stressing the importance of cross-referencing information across these different platforms to ensure alignment and gain diverse perspectives, as analyst predictions and free features can vary. He also prefers visual graphs for easier trend identification. All these websites also provide mobile apps.
Ref Video : https://www.youtube.com/watch?v=Fd2tU-qull8
STEP-1 -> Go to the Power Query Editor.
STEP-2 -> Click on the table whose data source you want to change.
STEP-3 -> Go to the Advanced Editor and copy the code from the Advanced Editor of the source that should become the new data source.
To run a Python program automatically using Windows Task Scheduler, follow these steps:
1. Create a Python Script: Ensure you have a Python script (.py file) that you want to run. For example, example.py.
2. Open Task Scheduler: Press Win + R to open the Run dialog, type taskschd.msc, and press Enter to open Task Scheduler.
3. Create a Basic Task: Click Create Basic Task in the Actions pane, name the task, and click Next.
4. Set the Trigger: Choose when you want the task to run and click Next.
5. Set the Action: Select Start a program and click Next.
6. Specify the Program to Run: In the Program/script field, enter the path to the Python executable, for example C:\Python39\python.exe (adjust the path according to your Python installation). In the Add arguments (optional) field, enter the path to your Python script, for example C:\path\to\your\script\example.py. In the Start in (optional) field, enter the directory where your script is located; this can help if your script relies on relative paths.
   Example:
   Program/script: C:\Python39\python.exe
   Add arguments: C:\Users\BKa\Desktop\Mail-Test.py
   Start in: C:\Users\BKa\Desktop
7. Finish the Task: Review the settings and click Finish.
8. Optional Settings: Right-click the task and open Properties. In the General tab, you can configure the task to run with highest privileges if needed. In the Conditions and Settings tabs, you can adjust additional settings such as stopping the task if it runs for too long, or running the task only if the computer is idle.
By following these steps, your Python script will run automatically according to the schedule you set in Windows Task Scheduler.
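A minimal sketch of a script you could schedule this way. The file name run_log.txt is an assumption for illustration; the log is written relative to the current working directory, which is exactly why the "Start in" field above matters:

```python
from datetime import datetime
from pathlib import Path

# Log file created in the current working directory, so the
# "Start in" field in Task Scheduler determines where it lands
LOG_FILE = Path('run_log.txt')

def log_run() -> str:
    """Append a timestamped line to the log so each scheduled run is visible."""
    stamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    with LOG_FILE.open('a', encoding='utf-8') as f:
        f.write(f'Script ran at {stamp}\n')
    return stamp

if __name__ == '__main__':
    print(f'Logged run at {log_run()}')
```

Each scheduled run appends one line, so you can verify from the log file that Task Scheduler triggered the script as expected.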