Just discovered the channel. Your material is high quality. It's excellent work. I will go watch more. Thank you, Adam!
@AdamMarczakYT
4 years ago
Thank you. This means a lot :)
@pradeeps3671
3 years ago
Hello Adam, please let me know how to connect to Dynamics CRM. Please send details to pradysg@gmail.com
@bifurcate-ai
4 years ago
Adam, I have been watching many of your videos. As someone new to Azure, I find your videos immensely valuable. Keep up the great work, I really appreciate it!
@AdamMarczakYT
4 years ago
Awesome, thank you!
@rosszhu1660
4 years ago
A quick question: Azure datasets seem to only support already structured data, like CSV or JSON. What if my data source is an unstructured text file that must be transformed into CSV before being used? Is there a way to do this transformation (possibly Python code) in Data Factory?
@AdamMarczakYT
4 years ago
Hey, you can call Azure Databricks, which can transform any file using Python/Scala/R etc. But Data Factory itself can't do it.
@rosszhu1660
4 years ago
@@AdamMarczakYT Got it. Thanks a lot! It looks like I have to learn Spark :-)
@davidakoko3308
4 years ago
Hi Mr Adam, how are you? I've been trying to use the add function to add two columns of numeric values but the result is wrong, e.g. add(COLUMN_A, COLUMN_B) results in COLUMN_AB instead of adding the values. Let's say column_a has a value of 334 and column_b has a value of 4; the result is 3344 instead of 338. Please can you help? Nice video BTW, thanks.
@AdamMarczakYT
4 years ago
Check out the concat function: docs.microsoft.com/en-us/azure/data-factory/data-flow-expression-functions#concat
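[Editor's note] The concatenation happens because the two columns arrive as strings, and add() on strings joins them. A sketch of both behaviors in Data Flow expression syntax, using the column names from the question above:

```
// strings concatenate: add('334', '4') -> '3344'
add(COLUMN_A, COLUMN_B)

// cast to integers first to get numeric addition: 334 + 4 -> 338
toInteger(COLUMN_A) + toInteger(COLUMN_B)
```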
@BijouBakson
2 years ago
It must be very challenging to do all this in English, I imagine, Adam! Congratulations on pushing through despite the difficulty. 🙂
@KkrDs97
3 years ago
Instead of Scala functions, is there a way we can use the PySpark functions for debugging? BTW these are great videos, thank you.
@AdamMarczakYT
3 years ago
Unfortunately not at this time. If you need more complex constructs or different languages, you need to use Databricks or HDInsight :)
@MrDamianKrol
4 years ago
Interesting videos. Can I find a video on your channel on ADF - Azure Batch Account - Python? Best regards,
@AdamMarczakYT
4 years ago
Thanks, unfortunately I don't have anything on Batch accounts with ADF. It's a rarely used scenario for me.
@hovardlee
3 years ago
-1979 and ,12. This is why complex logic is needed. Nice tutorial :)
@anubhav2020
3 years ago
Hello Adam, thanks a bunch for this excellent video. The tutorial was very thorough and anyone new can easily follow. I do have a question though. I am trying to replicate an SQL query in the Data Flow, but I have had no luck so far. The query is as follows: Select ZipCode, State From table Where State in ('AZ', 'AL', 'AK', 'AR', 'CO', 'CA', 'CT'...... LIST OF 50 STATES); I tried using the Filter, Conditional Split and Exists transforms, but could not achieve the desired result. Being new to the cloud platform, I am having a bit of trouble. Might I request that you please cover topics like data subsetting/filtering (WHERE and IN clauses etc.) in your tutorials. Appreciate your time and help in putting together these practical implementations.
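[Editor's note] Not answered in the thread, but one way to sketch that WHERE ... IN clause is a Filter transformation whose condition uses the in() function (the column name State is taken from the query above; the state list is abbreviated, as in the question):

```
// Filter transformation condition: keep rows whose State is in the list
in(['AZ', 'AL', 'AK', 'AR', 'CO', 'CA', 'CT'], State)
```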
@mangeshxjoshi
4 years ago
Hi, can Azure Data Factory be used to replace IBM DataStage mapping transformations? IBM DataStage is an ETL tool and Azure Data Factory is a managed data integration service in the cloud. Does Azure Data Factory support only Blob Storage, Azure Cosmos DB (SQL API), Azure Data Lake Storage, Azure SQL Data Warehouse and Azure SQL Database? Apart from these, does Azure Data Factory connect to SAP HANA, SAP BW, Oracle? Are there any connectors for pulling data from other sources like SAP HANA, Oracle, etc.?
@AdamMarczakYT
4 years ago
Hey, in general ADF has 80+ connectors, including SAP and Oracle. You use those to copy data from those sources to Blob Storage and then trigger a Mapping Data Flow pipeline to get the data from Blob Storage (or Data Lake), transform it, and output it back to Blob (or one of the supported output systems), from where ADF copies it to the designated place.
@sharmilashrestha5449
3 years ago
Adam, your tutorials are very simple to follow and provide a lot of insight. However, I could not set up my data flow the way I wanted. What I want to do is -> get a list of integers from Table 1 (on-prem SQL Server) -> use this list of integers to query a Cosmos structured stream -> then perform some transformations on the data returned from this Cosmos stream -> sink the transformed data back to Table 2 (on-prem SQL Server). However, I do not see my source datasets in the data flow dropdown. Any help would be appreciated.
@AdamMarczakYT
3 years ago
Make sure to check if you are using supported data source types: docs.microsoft.com/en-us/azure/data-factory/data-flow-source?WT.mc_id=AZ-MVP-5003556
@PicaPauDiablo1
4 years ago
Adam, is there a way to preserve the filename and just have it change the extension? For instance, I'm adding a column with a datetime, but at the end I would like it to have the same file name, just parquet. Is there a way to do that?
@AdamMarczakYT
4 years ago
Use expressions :) That's what they are for.
@PicaPauDiablo1
4 years ago
@@AdamMarczakYT Sorry if it was a dumb question, I'm still new to ADF. Ignore it if it's too inane, but is the filename in the @pipeline parameter? I found one online but couldn't get it to parse.
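[Editor's note] A possible sketch of the extension swap, not from the thread: assuming the source file name reaches the data flow as a parameter named fileName (a hypothetical name), the sink's file name expression could be

```
// 'movies.csv' -> 'movies.parquet'; $fileName is an assumed data flow parameter
replace($fileName, '.csv', '.parquet')
```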
@eshaandevgan312
3 years ago
I have a question, please help. I am not able to understand why Data Flows need to have their own datasets. Why not use the pipeline datasets? This will help me a lot. Thanks in advance.
@AdamMarczakYT
3 years ago
They can use pipeline datasets, but not all types/source systems are supported.
@eshaandevgan312
3 years ago
@@AdamMarczakYT Thanks Adam, and your videos are very nice. Keep it up.
@JohnJohnson-bs4cw
3 years ago
Great video. Can you use data from a REST API as a source for a Mapping Data Flow, or does the source have to be a dataset on Azure?
@AdamMarczakYT
3 years ago
Here is the list of supported data sources for MDF: docs.microsoft.com/en-us/azure/data-factory/data-flow-source?WT.mc_id=AZ-MVP-5003556 . Just copy the data from the REST API to Blob and then start the MDF pipeline using that blob path as a parameter.
@balanm8570
4 years ago
As usual, another awesome video, Adam!!! Excellent, it was to the POINT!!! Keep up the good work you have been doing for plenty of users like me. Eagerly waiting for more similar videos from you!!! Can you please do some videos on Azure Search...
@AdamMarczakYT
4 years ago
Thank you so very much :) Azure Search is on the list, but there is so much news coming from Ignite that I might need to change the order. Let's see all the news :).
@gursikh133
4 years ago
Adam, for using transformations do I need to learn Scala? Or can I just refer to the documentation you mentioned for the Scala functions and write the transformation?
@AdamMarczakYT
4 years ago
Documentation should be enough. MDF targets simple transformations, so in most cases the documentation alone will suffice.
@yashmeenkhanam3451
4 years ago
Outstanding! You just made Azure easy to learn. Thank you.
@AdamMarczakYT
4 years ago
Awesome, thank you!
@johnfromireland7551
3 years ago
ADF is just one part of about 100 significant tools and actions in Azure. :-(
@CallousCoder
2 years ago
Hi Adam, is it possible to create these pipelines as code as well? Or somehow create them from my actual Azure pipeline? It would be sheer insanity (but it is a Microsoft product) to have to maintain two pipelines: your Azure pipeline for CI and CD, and one for ADF. I would really want the Azure pipeline to be able to fill/create the ADF pipeline, but I haven't found anything yet.
@Cool2kid
4 years ago
Your video content is awesome!!! Your videos are very useful for understanding Azure concepts, especially for me, having just started my Azure journey. I would like to see a video on how to deploy code from Dev to QA to Prod, and how to handle connection strings, parameters, etc. during deployment. Thanks again for the wonderful video content.
@AdamMarczakYT
4 years ago
ADF CI/CD is definitely on the list. It's a bit of a complex topic to get right, so it might take time to prepare proper content around it. Thanks for watching and suggesting ;)
@achraferraji3403
2 years ago
Amazing video, we want more parts!
@jagadeeshpinninti3456
4 years ago
Can you please explain how to connect a source dataset to Azure Data Lake Storage Gen2 tables in Data Flows of Azure Data Factory?
@AdamMarczakYT
4 years ago
It's the same as Blob Storage: just create a linked service, select Azure Table Storage and create a dataset for it. Note that this is not supported for Mapping Data Flows.
@kevinabraham92
2 years ago
Nice video. Just curious, can you explain toInteger(trim(right(title,6),'()')) in detail please? Like, how does this command execute?
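[Editor's note] Reading the expression from the video inside-out, assuming a title like 'Toy Story (1995)':

```
right(title, 6)                        // take the last 6 characters: '(1995)'
trim(right(title, 6), '()')            // strip '(' and ')' characters: '1995'
toInteger(trim(right(title,6),'()'))   // cast the remaining string to an integer: 1995
```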
@TheLastBeat2
3 years ago
Hi Adam, so glad I found your channel. Your videos were a big help in achieving the AZ-900 certificate. Now I am studying a lot to uplift my knowledge and get the Azure Data Engineer certificate. However, I have an important question! Data flows are expensive; sometimes clients don't want to use them. Are there alternatives to achieve the same result in Azure Data Factory? Thank you very much!
@AdamMarczakYT
3 years ago
Well, you can't have the cookie and eat the cookie :) In my opinion it's not that expensive compared to other available tools.
@TheLastBeat2
3 years ago
@@AdamMarczakYT True! I am currently struggling with CSV files that sometimes have extra spaces after the words in the header, which then gives an error when doing a copy activity to Azure SQL Database. Do you have any idea how to make my flow a bit more flexible so that it can deal with this? It needs some trimming in the header.
@TheLastBeat2
3 years ago
I thought of doing a SELECT in a data flow to then change to the correct header titles, but for this I need to know where the spaces will be in the future, so that is also not flexible.
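[Editor's note] One possible approach, not from the thread and offered as an assumption: a Select transformation with a rule-based mapping can rename every column without knowing in advance where the spaces are, since $$ refers to each matched column name:

```
// rule-based mapping in a Select transformation (a sketch)
true()       // matching condition: match every incoming column
trim($$)     // 'Name as' expression: the column name with surrounding spaces removed
```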
@generaltalksoflife
3 years ago
Hi Adam, thanks for helping us learn new technologies. You are awesome 👌🏻👌🏻👌🏻👏👏.
@AdamMarczakYT
3 years ago
My pleasure!
@eramitgoswami
3 years ago
Your way of explaining is outstanding; after watching, it feels like Azure is very easy to learn. Kindly keep sharing good videos. Thank you.
@AdamMarczakYT
3 years ago
Thanks a ton :)
@Montreal_powerbi_connect
3 years ago
Wow, I like your video. I did it today and I had a good result. Thanks for your good explanation.
@AdamMarczakYT
3 years ago
Great job! Thanks!
@naveenkumar-tb1de
4 years ago
Hi Adam, please add some more content about the new features of Data Flow. Yours is the only channel where I see Azure ADF; no one teaches better than you do, and I have compared with many channels.
@AdamMarczakYT
4 years ago
Thank you! What kind of features do you think would be interesting to see?
@rohangarg6843
3 years ago
Great video, Adam. What is the difference between a Data Flow and a Copy activity within a pipeline? When should we go for a Data Flow instead of a Copy activity?
@AdamMarczakYT
3 years ago
The Copy activity just moves data in a 1:1 fashion (or a subset of columns). A Data Flow allows for data transformations/joins/aggregations of multiple inputs/outputs in a single step.
@dwainDigital
3 years ago
How do you delete from the target based on data from the source? I'm really struggling to understand what to do if I have a column with a value that I want to delete in the target table. Everything seems to be geared towards altering the source data coming in.
@hamidmushtaq7611
3 years ago
Wouldn't it be simpler to do all of this using code?
@AdamMarczakYT
3 years ago
Maybe, but low-code solutions allow data scientists and people with less technical knowledge of programming languages to perform data transformations. Code isn't always the best way; in fact, for some apps it's not a good thing. :)
@joyyoung3288
2 years ago
I get an error message, e.g. handshake_failure, when the data flow source retrieves data from an API. Can anyone help? Thanks.
@paulnelson1623
3 years ago
For anyone wondering how to make the year check (or any check) in the second step more robust, you can exchange the following expressions using the 'case' expression as used below, which says: if this expression evaluates as true, do this, else do something else. Worth noting here that in the first expression only a true expression is provided, while the second expression has both true and false directives. As per the documentation on the 'case' expression: "If the number of inputs are even, the other is defaulted to NULL for last condition."

/* Year column expression */
/* If the title contains a year, extract the year, else set to Null */
case(regexMatch(title, '([0-9]{4})'), toInteger(trim(right(title, 6), '()')))

/* title column expression */
/* If the title contains a year, strip the year from the title, else leave the title alone */
case(regexMatch(title, '([0-9]{4})'), toString(left(title, length(title)-7)), title)
@AdamMarczakYT
3 years ago
Thanks Paul :) I used as simple an example as possible for people who aren't fluent in Scala, but of course you always need to cover all possible scenarios. Sometimes I like to fail the transformation rather than continue with fallback logic, as I expect some values to be present.
@paulnelson1623
3 years ago
@@AdamMarczakYT Of course, I just wanted to see if I could take it a step further to align more closely with what would be needed in a production data engineering scenario, and thought others may have the same idea. Thanks for the content! :)
@AdamMarczakYT
3 years ago
Thanks, I bet people will appreciate this :)
@mohitjoshi1361
3 years ago
Have any of these options changed now? Because I am not able to see any data flow debug option to enable, or to directly preview data in the dataset itself.
@yashnegi9473
2 years ago
The video is excellent. I want to know the problem statement that Data Flow solves.
@samb9403
2 years ago
Great video. Question: under "New Datasets", is there a capability to drop data into Snowflake? I see S3, Redshift, etc. I appreciate the video and feedback!
@tenghover
3 years ago
Do you plan to make a video introducing each of the transformation components? Thanks
@ahmedmj8729
A year ago
Hello Adam, I followed these steps but I have a problem: I can't find the source columns when I go to the Derived Column component to write an expression based on an existing column. In your video, the total columns shown in the source component = 3; for me it's 0. I changed the source from CSV to a SQL table and I still didn't find the solution.
@nidhisharma-rb7nx
2 years ago
Adam, great video. I'm new to Data Flow and I have one doubt: I want to implement file-level checks in Data Flow but am not able to do it. All the tasks perform data-level checks, like Exists or Conditional Split. Is it possible to implement a file-level check, like whether a file exists or not in the storage account?
@notonprem
A month ago
This is quality stuff. Good for a quick upskill, especially when prepping for an interview.
@abhim4nyu
A year ago
Will it work with a pipe ("|") separated value file instead of CSV?
@javm7378
2 years ago
I really like your tutorials. I have been looking for a "table partition switching" tutorial but haven't found any good ones. Maybe you could do one for us? I am sure it'll be very popular, as there aren't any good ones out there and it is an important topic in certifications :-)
@lavanyay2767
3 months ago
A very, very detailed workflow. I tried this and was able to understand the Data Flow process so easily. Thank you for the wonderful session.
@snackymcgoo1539
3 years ago
I call foul. There is a space between the movie title "Toy Story" and the year "(1995)". The formula should return "Toy Story " with a space on the end, not "Toy Story". But reviewing the output, the space is trimmed off regardless of which of these 2 expressions is used: left(title, length(title)-6) or left(title, length(title)-7). Both yield the exact output "Toy Story" with no space. This is not ok.
@AdamMarczakYT
3 years ago
You can always use the rtrim function too! :) The expressions and data used in the demo are from Microsoft's example.
@mustafakamal5945
3 years ago
Hi Adam, thanks for making these videos, very clear and concise. I have a question (sorry, not related to this video) regarding Conditional Split: can the output stream activities run in parallel?
@AdamMarczakYT
3 years ago
They typically run in parallel, as it's Apache Spark behind the scenes.
@mustafakamal5945
3 years ago
@@AdamMarczakYT Thank you!
@chandrasekharnallam2578
3 years ago
Excellent explanation with a simple scenario. Thank you.
@AdamMarczakYT
3 years ago
Glad it was helpful!
@fadiabusafat5162
3 years ago
Nice one, Adam. Cool one. Keep making fabulous videos, fella. Many thanks.
@eddyjawed
7 months ago
Thank you Adam, dziękuję! This is a great tutorial.
@big-bang-movies
3 years ago
Hi Adam, a few doubts. Please help me understand. 1. At 10:04, after running the data flow the 1st time, 9125 rows got populated. But there is no output sink or output dataset associated with the data flow yet, so where exactly are those ingested rows getting saved/populated? 2. At 15:04, after recalculating "title" (by removing the year part), how come the previous original column (title) disappeared? The modified title column should appear in addition to the previous original column (title), right?
@AdamMarczakYT
3 years ago
Hey. 1. It's the amount of rows loaded. 2. If you create a new column with the same name, it will replace the old one. In this case we replaced the title column.
@SairamPoluru
4 years ago
I got stuck at Derived Column, since I was not able to get columns from the source (Movies input).
@AdamMarczakYT
4 years ago
Check if you selected the 'first row as headers' checkbox in your dataset. Also, there is a preview data button there which you can use to check whether you set up your dataset properly. If all fails, I often advise my colleagues to just delete everything and start over; while it sounds a bit weird, it really is sometimes faster than finding the issue, and provides a nice learning curve. Good luck!
@seb6302
4 years ago
I have the same issue - did you resolve it?
@SIDDHANTSINGHBCE
3 years ago
These videos are great. They're helping me so much! Thanks Adam
@AdamMarczakYT
3 years ago
Glad you like them!
@mrjedrek1112
3 years ago
Hi, I have an issue and I am wondering if you could help me. I have created a similar data flow. When I run a pipeline with this data flow inside, I can see that a new file was created in my data lake. Unfortunately, this file is always empty, but when I click preview data within the data flow (in the sink tool) I can see data. Empty means it has column names but no data. The file is CSV.
@AdamMarczakYT
3 years ago
Which file are you checking? Mapping Data Flow creates many files in the output to follow the partitioned model, which is HDFS-compatible. Typically there is an empty file and a folder which contains the partitioned data.
@techBird-b2m
2 years ago
👍 It's amazing, a practical implementation of Data Flow.
@Raguna
2 years ago
Very good explanation of the Data Flow. Thanks, Mr. Adam.
@sarahaamir7457
4 years ago
Thank you so much, Adam! This was a very clear and great video, and a big help for my interview and my knowledge.
@AdamMarczakYT
4 years ago
Very welcome! Thanks for stopping by :)
@soumikdas7709
3 years ago
Your videos are very informative and practice-oriented. Keep it up.
@AdamMarczakYT
3 years ago
Thank you, I will!
@RahulRajput_018
3 years ago
Thanks buddy... great work!
@AdamMarczakYT
3 years ago
My pleasure
@549srikanth
3 years ago
I would say this is the best content I've seen so far!! Thank you so much for making it, Adam! Just wondering, is there a Ctrl+Z or Ctrl+Y command in case we make some changes in the data flow and want to restore it to a previous version?
@AdamMarczakYT
3 years ago
Awesome, thanks! Unfortunately not, but you can use versioning in the data factory, which will allow you to revert to a previous version in case you broke something. Highly recommended. Unfortunately, there are no reverts for specific actions.
@549srikanth
3 years ago
@@AdamMarczakYT Excellent!! Thank you so much for your reply!
@johnfromireland7551
3 years ago
@@549srikanth I publish each time I create a significant new step in the pipeline, and I use data preview before moving on to the next step. Also, you can, I think, export the code version of the entire pipeline. Presumably you can then paste that into a new pipeline to resurrect your previous version.
@GiovanniOrlandoi7
3 years ago
Great video! Thanks Adam!
@AdamMarczakYT
3 years ago
My pleasure!
@KarthikeshwarSathya
3 years ago
This was explained very well. Thank you.
@AdamMarczakYT
3 years ago
You're very welcome!
@seb6302
4 years ago
I have an issue with the column 'title' not being found in the Derived Column, despite being able to see all the columns in the source beforehand. Very confused!
@seb6302
4 years ago
When attempting to aggregate, no columns are found, again despite seeing them in the source.
@seb6302
4 years ago
I've rebuilt the whole thing and still face the same issue. Google yields no results either. Does anyone know what I'm doing wrong?
@seb6302
4 years ago
Just tried again and it works! The only difference this time round was that I didn't enable data flow debug. No idea why it worked this time.
@seb6302
4 years ago
Also, 'Actions' no longer exists under pipeline. Is there a new way to view the details pane? I can't seem to find one.
@seb6302
4 years ago
These actions can now be found if you hover over 'Name'!
@rahulkota9793
3 years ago
Very useful. Thank you so much.
@AdamMarczakYT
3 years ago
Glad it was helpful!
@oathkeepersapphirelands
4 years ago
How do you handle parallel execution of your pipeline when triggered by events, to avoid duplicates?
@AdamMarczakYT
4 years ago
You need to do this as part of your flow design. Unfortunately, some things can't be solved by tools. Thanks for watching! :)
@oathkeepersapphirelands
4 years ago
@@AdamMarczakYT OK, maybe handle it by Run ID, I guess :)
@joshuaodeyemi3098
A year ago
I love you, Adam! I have been struggling with using the expression builder in Data Flow. I can't seem to figure out how to write the code. This video just made it look less complex. I'll be devoting more time to it.
@kirankumarreddykkr9606
10 months ago
Can you use PySpark or SQL in the expression functions, or only Scala?
@Eubilecki
4 years ago
Can I do the same process with SQL? How does the partitioning work in this case? Does it separate into a SQL database, or just create one single table with the results?
@AdamMarczakYT
4 years ago
You mean using data flows with on-premises SQL? Well, a data flow can't run on a self-hosted integration runtime, so your best bet is to copy the data to Azure Blob, transform it, and then put it back into the on-prem SQL. Remember, underneath it's Databricks, so data flows are not running on SQL.
@dimitarkrastev6085
A year ago
Great video! Most videos seem to focus mostly on the advertisement material straight from Azure. At best they show you the very dumb step of copying data from a file to a DB. This is the first video I saw where you actually show how you can do something useful with the data, close to a real-life scenario. Thank you.
@jayakrishna9153
4 years ago
Could you please share the CSV file that you're using for this demo?
@AdamMarczakYT
4 years ago
Good catch, I forgot to add this to the description. You can find the file here: github.com/MarczakIO/Azure4Everyone/tree/master/AzureDataFactory-MappingDataFlow-Intro
@subhodipsaha7608
4 years ago
I just found your videos while searching for ADF tutorials on YouTube. The materials are fantastic and are really helping me learn. Thank you so much!!
@AdamMarczakYT
4 years ago
Happy to help! :)
@charlesdarkwind
4 years ago
I tried Data Flow the other day and it was unbelievably buggy; literally everything I tried was not working, the errors made no sense, and performance was terrible too. Clearly not ready!
@AdamMarczakYT
4 years ago
This is surprising. We used it quite a bit for small projects and it performed fine. What kind of errors/bugs did you encounter? I'm at MS Ignite right now, so I can ask the product group about it.
@MrVivekc
3 years ago
Very good explanation, Adam. Keep it up.
@AdamMarczakYT
3 years ago
Thanks, will do!
@MrVivekc
3 years ago
@@AdamMarczakYT Adam, do we have a trial version of Azure for learning purposes?
@Lakshmi-y4x
2 months ago
Thank you, very helpful tutorials
@jagerzhang4059
3 years ago
If I change the value of the date, how can I do it every day with the trigger time? I mean, pass the datetime from the trigger time as the blob path.
@AdamMarczakYT
3 years ago
There are some properties available in the trigger which can be used, for example @trigger().startTime: docs.microsoft.com/en-us/azure/data-factory/concepts-pipeline-execution-triggers?WT.mc_id=AZ-MVP-5003556#trigger-type-comparison
@horatiohe
3 years ago
Thanks for the great content!! You are the man :)
@AdamMarczakYT
3 years ago
I appreciate that!
@dintelu
3 years ago
Wow, lucid explanation.
@AdamMarczakYT
3 years ago
Glad you think so!
@omarsantamaria6871
4 years ago
Hello Adam. Your video is impressive, as always, but I'm concerned about the source dataset. Question: does the Data Flow activity only work if the data sources are connected to Azure SQL? I tried using a previous dataset connected to the local server, but this dataset does not appear under Source settings / Source options / Source dataset in the Data Flow activity. I tried with the New option and it only allows selecting the Azure datasets; all the on-premises database options are disabled, so I couldn't create a dataset for SQL Server either.
@AdamMarczakYT
4 years ago
Hey, Mapping Data Flows currently support 6 data services for both source and sink: docs.microsoft.com/en-us/azure/data-factory/data-flow-source#supported-source-connectors-in-mapping-data-flow I'd check if you can trick data flows into connecting to an on-premises SQL Server by using the Azure SQL connector, but I never personally tried.
@lowroar5127
2 years ago
So helpful! Thank you very much Adam!
@shashankkharade2694
3 years ago
Does DFT support SAP as a data source?
@AdamMarczakYT
3 years ago
Some SAP products are supported; always check the official documentation: docs.microsoft.com/en-us/azure/data-factory/connector-overview?WT.mc_id=AZ-MVP-5003556
@shashankkharade2694
3 years ago
@@AdamMarczakYT Thank you. And is there any way one can access files from a Linux OS based system, as we can't install a self-hosted IR on Linux?
@muralikanala2826
3 years ago
Hi Adam, can you suggest how to perform custom transformations on our data?
@AdamMarczakYT
3 years ago
If Mapping Data Flow doesn't suffice, then maybe try Azure Databricks. Check my tutorial and decide for yourself. I use it in pretty much every project. :)
@rajanarora6655
3 years ago
Your videos are really great and helped me understand a lot of Azure concepts. Can you please make one using an SSIS package and show how to use that within Azure Data Factory?
@yashgemini4024
4 years ago
Appreciate your content. Thanks.
@AdamMarczakYT
4 years ago
My pleasure! :)
@AutomationBIAI
3 years ago
Hi Adam, what is the language for Data Flow? How can I find more resources for it?
@AdamMarczakYT
3 years ago
It's Scala :)
@adiky
4 years ago
Hi Adam, that's a great tutorial, many thanks for it. I have a question: can we write the transformation functions in a different language like Python or R instead of Scala? If yes, can you please share some details on it?
@AdamMarczakYT
4 years ago
Unfortunately not right now :( If you need those, then use Azure Databricks instead.
@rubendarksun6691
2 years ago
Or a Python/R script in a Batch process, right? Databricks would be the better option if you need Spark, since it's also more expensive than Batch.
@isurueranga9704
3 years ago
Best tutorial ever... 💪🏻💪🏻💪🏻
@Lego-tech
4 years ago
Very crisp and clear information. I watched many videos, but Adam's content is awesome!! Thanks, dear!! All the best for future good work!!
@AdamMarczakYT
4 years ago
Thank you so much 🙂
@JoeandAlex
3 years ago
Adam, a question on Data Flow: do we have a feature available in ADF where we can call data flows dynamically in a single pipeline?
@AdamMarczakYT
3 years ago
Not sure that I understand; the video shows how to execute an MDF from an ADF pipeline.
@JoeandAlex
3 years ago
@@AdamMarczakYT Thank you. My question was: if I have 5 data flows in place and want to execute those 5 data flows using a single pipeline, is it possible? If yes, do we need to create 5 different data flow activities in the same pipeline, or is there another way?
@DrDATA-ep6mg
3 years ago
Very nice tutorial 👍
@AdamMarczakYT
3 years ago
Thank you! Cheers!
@MrSARAZZ
4 years ago
Hi Adam, just watched two of your videos on Azure Data Factory, nice work. Any chance you can do one on ADF using a REST API as a data source with JSON output, then storing it in a SQL Server sink?
@AdamMarczakYT
4 years ago
Great suggestion! I'll add it to the list of potential topics. :) Thanks for watching ;)
@harshapatankar484
3 years ago
Amazing videos.
@AdamMarczakYT
3 years ago
Glad you think so! :)
@subhraz
2 years ago
Very well explained and demonstrated. Really helpful for getting started with Data Flows.
@indhumathi4727
3 years ago
I need to join a header with data; the header is dynamic. How can I retain the order of the merge?
@AdamMarczakYT
3 years ago
I'm not sure if Data Factory maintains the order during the union.
@ovespathan1
4 years ago
Hi Adam, your videos are really informative. They have helped me a lot on my Azure learning path. Can you please help me with how to do an incremental load from CSV files to our SQL tables using Mapping Data Flows? I.e., in my file there are some new records as well as updates for some already existing records. I want to do this insert and update through a Mapping Data Flow. Thanks in advance.
@AdamMarczakYT
4 years ago
Thanks. It sounds like you want an upsert rather than an incremental load, so you should check the Alter Row action. More here: docs.microsoft.com/bs-latn-ba/azure/data-factory/data-flow-alter-row :)
@susanmyers1
4 years ago
At work I'm having to build out a data mart on my own with no training. You are literally saving my hide with your videos. THANK YOU!
@AdamMarczakYT
4 years ago
Glad to help! :)
@joeyt.1504
4 years ago
Great videos, thank you! Just a question: what is the difference between a storage account and a data lake? Costs and type of saved data?
@AdamMarczakYT
4 years ago
Hi, thanks! You might be interested in checking out my video on Data Lake: kzitem.info/news/bejne/k6uJ0Z54fqmopGU It goes into great detail on all the differences.
@arun06530
3 years ago
Nice and detailed video.
@AdamMarczakYT
3 years ago
Thank you!
@mohmmedshahrukh8450
A year ago
The best video on Azure I have ever seen ❤❤
@mohmedaminpatel4427
3 years ago
Around 20:20 we can see there is just one partition. Does Azure automatically decide the number of partitions it needs to divide the dataset into? Also, is it done at some cost, i.e. do more partitions cost more, or is it complementary? Thank you for all the tutorials; I have been binge-watching them for 3 days now and thoroughly enjoying them! Would love to see some tutorials for Synapse as well :) !
@mohmedaminpatel4427
3 years ago
Wow, I should have waited before making the comment, as you have explained it later in the video itself. Thank you, Adam!
@AdamMarczakYT
3 years ago
Glad it helped, thanks! :)
@grzegorzz4025
3 years ago
Adam, great tutorial! Kudos!
@AdamMarczakYT
3 years ago
Glad you liked it!
@hafidazer1634
A year ago
I owe you my paycheck, tbh 😅🤣
@ngophuthanh
2 years ago
Thank you, Adam. As always, you rock.
@sapecyrille5487
A year ago
Great! You are the best, Adam.
@icici321
4 years ago
I am new to data and ETL stuff, but your videos are too good. Excellent examples and very clear explanations, so anyone can understand. Thanks very much.
Comments: 305