Thanks for the demo. A couple of feature requests:
1. Would it be possible to add a file watcher so that the job starts once a file lands in a certain storage location? Scheduling is OK, but if you want to replace ADF, a file watcher would be great.
2. Notifications to Teams/Slack channels would be awesome.
3. Parameters should be autofilled from the notebook widgets. It's a bit annoying that they have to be created manually.
4. An intermediate event where the process pauses and waits for user input before proceeding. This is lacking in ADF, and ADB has the UI capabilities to implement it.
Make Workflows such that we don't need any other tool/UI to add the missing functionality I mentioned above.
@sahilarora6638
2 years ago
Hi Bilal - I think you have asked exactly the right question.
@Megashorus
2 years ago
In my Databricks workspace, the dbt task type option is not visible. Is this still in private preview?
@tanushreenagar3116
A year ago
Perfect 👌
@mohammedalmatary380
7 months ago
Hello, regarding the deployment of the workflows: how can you deploy a workflow to another environment, e.g. dev -> qa?
@frankmunz1659
7 months ago
Please check out Databricks Asset Bundles (DABs).
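To illustrate the suggestion above: with Databricks Asset Bundles, the same job definition can be deployed to different environments by declaring multiple targets in a databricks.yml. A minimal sketch (the bundle name, workspace hosts, and notebook path below are placeholders, not values from the video):

```yaml
# databricks.yml - hypothetical minimal bundle with dev and qa targets
bundle:
  name: my_workflow_bundle

targets:
  dev:
    default: true
    workspace:
      host: https://dev-workspace.cloud.databricks.com
  qa:
    workspace:
      host: https://qa-workspace.cloud.databricks.com

resources:
  jobs:
    etl_job:
      name: etl_job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./notebooks/main
```

Deploying to QA would then be a matter of running `databricks bundle deploy -t qa` with the Databricks CLI; validate the bundle first with `databricks bundle validate`.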
@dhirenpachchigar4693
A year ago
Is it possible to deploy a workflow job from one Databricks workspace to another? Could you please guide us?
@tiboootibooo
2 years ago
Thanks for the overview! 1. Can you set up a different git repo for each task, like task 1 fetching from a common ETL repo and task 2 depending on an ML repo? 2. Also, is there any way to pass a variable from one task to another, like with the notebook workflows API and globalTempView?
@rbharath89
2 years ago
1. Assuming we have a failed notebook task, will repair run rerun the entire notebook or resume from the failed cell in that notebook? 2. Is it possible to configure tasks on failed runs? Say, trigger a cleanup notebook task if the load task fails?
@M0481
2 years ago
1. It will rerun the entire notebook. 2. I don't think that's possible, although you can always build that into the notebook itself. Note that I am not affiliated with Databricks, so forgive me if I am wrong :)!
@rbharath89
2 years ago
@@M0481 thank you...
@lingxu9697
2 years ago
@@M0481 I don't think that is the case; if you look at what gets rerun in the demo, it starts from the failed task instead of the first task.
@M0481
2 years ago
@@lingxu9697 The question was related to the notebook and the individual cells of the given task. To illustrate, a job may have >=1 tasks, each task has one notebook, each notebook has >=1 cells. Previously a job failing meant having to re-run the entire job and thus all of the tasks and their respective notebooks. Now, if a job fails, we can repair the run by only running the tasks and their respective notebooks that failed. Now, within a task, it is not possible to re-run part of the task from a given cell as that would require Databricks to persist the state of the task/notebook.
@kaladharnaidusompalyam851
2 years ago
Can I know who the speaker is here?
@RobinBloehm
2 years ago
Frank Munz
@kaladharnaidusompalyam851
2 years ago
@@RobinBloehm Are there any Udemy videos available to take a course?
@frankmunz1659
A year ago
@@kaladharnaidusompalyam851 Thanks for asking. Try this to get started: kzitem.info/news/bejne/wK-cp6R3e5OWdaA
Comments: 17