Hussein, do you think about table rows when you go for a walk? 🤭
@hnasr
3 years ago
sometimes, I usually think about columns :D :D :D
@jonscott4775
3 years ago
Interesting that you raise this. Have you come across real-life use cases where this has actually caused a problem? I often use large text in postgres with the text search facilities and generally find it very good (good performance and easy to use).
@hnasr
3 years ago
Thanks for the feedback. I'm pretty sure the Postgres team ran several tests to ensure performance. Just curious, how large are your texts? Are they 700 MB to 1 GB? I would expect performance to take a hit at that scale, and it also matters whether your box is IO-bound or CPU-bound. If there is enough memory in the box, the text will be stored detoasted.
@aymanpatel5862
3 years ago
Is there a Postgres doc that explains this in writing?
@kevinshah_tv
3 years ago
obviously: wiki.postgresql.org/wiki/TOAST

From the page:

"TOAST is a mechanism PostgreSQL uses to keep physical data rows from exceeding the size of a data block (typically 8KB). Postgres does not support physical rows that cross block boundaries, so the block size is a hard upper limit on row size. To allow user tables to have rows wider than this, the TOAST mechanism breaks up wide field values into smaller pieces, which are stored "out of line" in a TOAST table associated with the user table.

Each table you create has its own associated (unique) TOAST table, which may or may not ever end up being used, depending on the size of rows you insert. All of this is transparent to the user, and enabled by default.

When a row that is to be stored is "too wide" (the threshold for that is 2KB by default), the TOAST mechanism first attempts to compress any wide field values. If that isn't enough to get the row under 2KB, it breaks up the wide field values into chunks that get stored in the associated TOAST table. Each original field value is replaced by a small pointer that shows where to find this "out of line" data in the TOAST table. TOAST will attempt to squeeze the user-table row down to 2KB in this way, but as long as it can get below 8KB, that's good enough and the row can be stored successfully."
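The two-step process quoted above (compress first, then split out of line) can be sketched in Python. The 2 KB threshold and 8 KB block size are Postgres defaults, but zlib and the chunk size here are illustrative stand-ins; Postgres actually uses pglz or LZ4 and its own on-disk chunk format:

```python
import zlib

BLOCK_SIZE = 8 * 1024        # typical Postgres data block (BLCKSZ)
TOAST_THRESHOLD = 2 * 1024   # default row width that triggers TOASTing
CHUNK_SIZE = 2000            # illustrative out-of-line chunk size

def toast_field(value: bytes):
    """Return (inline_bytes, out_of_line_chunks) for one wide field value."""
    if len(value) <= TOAST_THRESHOLD:
        return value, []                      # narrow enough: stored as-is
    compressed = zlib.compress(value)         # step 1: try compression
    if len(compressed) <= TOAST_THRESHOLD:
        return compressed, []                 # compressed copy fits inline
    # step 2: split the value into chunks stored "out of line" in the
    # TOAST table; the heap row keeps only a small pointer in its place
    chunks = [value[i:i + CHUNK_SIZE] for i in range(0, len(value), CHUNK_SIZE)]
    return b"<toast pointer>", chunks
```

With this sketch, a 10 KB run of identical bytes compresses below the threshold and stays inline, while 10 KB of random (incompressible) bytes gets split into five out-of-line chunks.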
@DushyantDeshwal
3 years ago
Very helpful; I was setting up a remote server on my local network.
@ranjithsivanandham1927
3 years ago
Sir, please make a course on software architecture; I couldn't find one anywhere else. Or could you suggest the best software architecture course available somewhere?
@jworne79
3 years ago
Very well explained
@ripudamank
3 years ago
Sir, I recently made a simple location-tracking app with Node and Mongo. I was confused about how to store the location (lat, long) in the database, given that it updates very frequently for multiple devices at the same time. I actually ended up storing the coordinates in a text file on the server's filesystem and wrote to the file in real time. Is this approach correct?
@benzobox2308
3 years ago
Definitely not. Store your lat/long in Mongo using a geospatial index. You can then run specialized queries against the coordinates.
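A hedged sketch of what that looks like: MongoDB's 2dsphere index expects locations stored as GeoJSON points. The collection and field names ("devices", "loc") are made up for illustration, and the pymongo calls are shown in comments since they need a running server:

```python
def location_doc(device_id: str, lat: float, lng: float) -> dict:
    """Build a device-location document in the GeoJSON shape Mongo indexes."""
    # Note: GeoJSON uses [longitude, latitude] order, not [lat, lng]
    return {
        "device_id": device_id,
        "loc": {"type": "Point", "coordinates": [lng, lat]},
    }

doc = location_doc("device-42", 40.7128, -74.0060)

# With a live connection (assumed pymongo usage):
#   coll = MongoClient().tracking.devices
#   coll.create_index([("loc", "2dsphere")])
#   coll.insert_one(doc)
#   nearby = coll.find({"loc": {"$near": {
#       "$geometry": {"type": "Point", "coordinates": [-74.0, 40.71]},
#       "$maxDistance": 500}}})  # meters
```

The coordinate-order swap is the classic pitfall here: GeoJSON is [lng, lat], while most people write coordinates as (lat, lng).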
@smrtysam
3 years ago
PostGIS
@sumitrawat2037
3 years ago
Does compression on the Postgres side make sure the size won't exceed 8KB?
@hnasr
3 years ago
Correct, that is the first line of defense: store the data inline and compressed. But if that doesn't get the row small enough, Postgres pushes the value out to the external TOAST table.
@TomLisankie
3 years ago
Does anyone know the reasoning for why page size cannot be set to anything over 8 kb?
@hnasr
3 years ago
It can, but you will have to recompile Postgres; I don't believe it's a runtime config option.
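For reference, the block size is a compile-time choice: PostgreSQL's configure script accepts a `--with-blocksize` flag (size in kB, a power of 2 up to 32, default 8), so a build with 16 KB blocks looks roughly like:

```shell
# Sketch: rebuild PostgreSQL from source with 16 KB data blocks.
# --with-blocksize takes the block size in kB (1-32, power of 2; default 8).
./configure --with-blocksize=16
make
make install
```

Note that a cluster initialized with one block size cannot be opened by a server compiled with another, so this is a decision made before initdb.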
@TomLisankie
3 years ago
@@hnasr Yes, that makes sense, but my question is more about why the size is so small in the first place. What is the reasoning?
Comments: 18