Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (38)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including:
    • critique of existing features and functions
    • articles contributed by developers, administrators, content producers and editors
    • screenshots to illustrate the above
    • translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

On other sites (6555)

  • Developing A Shader-Based Video Codec

    22 June 2013, by Multimedia Mike — Outlandish Brainstorms

    Early last month, this thing called ORBX.js was in the news. It ostensibly has something to do with streaming video and codec technology, which naturally catches my interest. The hype was kicked off by Mozilla honcho Brendan Eich when he posted an article asserting that HD video decoding could be entirely performed in JavaScript. We’ve seen this kind of thing before using Broadway, an H.264 decoder implemented entirely in JS. But that exposes some very obvious limitations (notably CPU usage).

    But this new video codec promises 1080p HD playback directly in JavaScript, which is a lofty claim. How could it possibly do this? I got the impression that performance was achieved using WebGL, an extension which allows JavaScript access to accelerated 3D graphics hardware. Browsing through the conversations surrounding the ORBX.js announcement, I found this confirmation from Eich himself:

    You’re right that WebGL does heavy lifting.

    As of this writing, ORBX.js remains some kind of private tech demo. If there were a public demo available, it would necessarily be easy to reverse engineer the downloadable JavaScript decoder.

    But the announcement was enough to make me wonder how it could be possible to create a video codec which effectively leverages 3D hardware.

    Prior Art
    In theorizing about this, it continually occurs to me that I can’t possibly be the first person to attempt to do this (or the ORBX.js people, for that matter). In googling on the matter, I found various forums and Q&A posts where people asked if it were possible to, e.g., accelerate JPEG decoding and presentation using 3D hardware, with no answers. I also found a blog post which describes a plan to use 3D hardware to accelerate VP8 video decoding. It was a project done under the banner of Google’s Summer of Code in 2011, though I’m not sure which open source group mentored the effort. The project did not end up producing the shader-based VP8 codec originally chartered but mentions that “The ‘client side’ of the VP8 VDPAU implementation is working and is currently being reviewed by the libvdpau maintainers.” I’m not sure what that means. Perhaps it includes modifications to the public API that supports VP8, but is waiting for the underlying hardware to actually implement VP8 decoding blocks in hardware.

    What’s So Hard About This?
    Video decoding is a computationally intensive task. GPUs are known to be really awesome at chewing through computationally intensive tasks. So why aren’t GPUs a natural fit for decoding video codecs?

    Generally, it boils down to parallelism, or the lack of opportunities for it. GPUs are really good at doing the exact same operations over lots of data at once. The problem is that decoding compressed video usually requires multiple phases that cannot be parallelized with one another, and the individual phases often cannot be parallelized internally. In strictly mathematical terms, a compressed data stream will need to be decoded by applying a function f(x) over each data element, x₀ .. xₙ. However, the function relies on having applied the function to the previous data element, i.e.:

    f(xₙ) = f(f(xₙ₋₁))

    What happens when you try to parallelize such an algorithm? Temporal rifts in the space/time continuum, if you’re in a Star Trek episode. If you’re in the real world, you’ll get incorrect, unusable data, as the parallel computation is seeded with a bunch of invalid data at multiple points (which is illustrated in some of the pictures in the aforementioned blog post about accelerated VP8).
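
    In code form, the dependency looks something like this (a toy JavaScript illustration of the structure only, not any real codec): each output needs the previous output, so the loop cannot be split across parallel workers.

    // Toy sequential decode: element i cannot be computed until
    // element i-1 has been computed first
    function decodeStream(encoded, f) {
      const decoded = new Array(encoded.length);
      let previous = 0; // seed value
      for (let i = 0; i < encoded.length; i++) {
        previous = f(encoded[i], previous); // serial dependency
        decoded[i] = previous;
      }
      return decoded;
    }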

    Example: JPEG
    Let’s take a very general look at the various stages involved in decoding the ubiquitous JPEG format:


    [Figure: High-level JPEG decoding flow]

    What are the opportunities to parallelize these various phases?

    • Huffman decoding (run-length decoding and zig-zag reordering are assumed to be rolled into this phase): not many opportunities for parallelizing the various Huffman formats out there, including this one. Decoding most Huffman streams is necessarily a sequential operation. I once hypothesized that it would be possible to engineer a codec to achieve some parallelism during the entropy decoding phase, and later found that On2’s VP8 codec employs such a scheme. However, such a scheme is unlikely to break down to the fine level of granularity that WebGL would require.
    • Reverse DC prediction: JPEG — and many other codecs — doesn’t store full DC coefficients. It stores differences between successive DC coefficients. Reversing this process can’t be parallelized. See the discussion in the previous section.
    • Dequantize coefficients: This could be heavily parallelized. It should be noted that software decoders often don’t dequantize all coefficients. Many coefficients are 0, and it’s a waste of a multiplication operation to dequantize them. Thus, this phase is sometimes rolled into the Huffman decoding phase.
    • Invert discrete cosine transform: This seems like it could be highly parallelizable. I will be exploring it further in this post.
    • Convert YUV -> RGB for final display: This is a well-established use case for 3D acceleration; a sketch follows this list.
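
    To illustrate that last point, here is a minimal sketch of a YUV -> RGB conversion in a WebGL fragment shader. The plane layout, uniform names, and BT.601 constants are my own assumptions for illustration, not anything taken from ORBX.js:

    const yuvToRgbFragmentShader = `
      precision mediump float;
      varying vec2 vTexCoord;
      uniform sampler2D yPlane; // luma samples
      uniform sampler2D uPlane; // chroma U (typically subsampled)
      uniform sampler2D vPlane; // chroma V (typically subsampled)

      void main() {
        float y = texture2D(yPlane, vTexCoord).r;
        float u = texture2D(uPlane, vTexCoord).r - 0.5;
        float v = texture2D(vPlane, vTexCoord).r - 0.5;
        // Every pixel runs the exact same math: precisely the kind
        // of work GPUs parallelize well
        gl_FragColor = vec4(y + 1.402 * v,
                            y - 0.344 * u - 0.714 * v,
                            y + 1.772 * u,
                            1.0);
      }
    `;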

    Crash Course in 3D Shaders and Humility
    So I wanted to see if I could accelerate some parts of JPEG decoding using something called shaders. I made an effort to understand 3D programming and its associated math throughout the 1990s but 3D technology left me behind a very long time ago while I got mixed up in this multimedia stuff. So I plowed through a few books concerning WebGL (thanks to my new Safari Books Online subscription). After I learned enough about WebGL/JS to be dangerous and just enough about shader programming to be absolutely lethal, I set out to try my hand at optimizing IDCT using shaders.

    Here’s my extremely high-level (and probably hopelessly naive) view of the modern GPU shader programming model:


    [Figure: Basic WebGL rendering pipeline]

    The WebGL program written in JavaScript drives the show. It sends a set of vertices into the WebGL system, and each vertex is processed through a vertex shader. Then, each pixel that falls within a set of vertices is sent through a fragment shader to compute the final pixel attributes (R, G, B, and alpha value). Another consideration is textures: data that the program uploads to GPU memory and that can be accessed programmatically by the shaders.
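
    In code, that model reduces to a small amount of boilerplate. Here is a minimal sketch of the flow just described using only standard WebGL 1 calls; vertexSource, fragmentSource, and vertices are assumed to be defined elsewhere:

    // Compile one shader stage (gl.VERTEX_SHADER or gl.FRAGMENT_SHADER)
    function compileShader(gl, type, source) {
      const shader = gl.createShader(type);
      gl.shaderSource(shader, source);
      gl.compileShader(shader);
      if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        throw new Error(gl.getShaderInfoLog(shader));
      }
      return shader;
    }

    const gl = document.querySelector('canvas').getContext('webgl');
    const program = gl.createProgram();
    gl.attachShader(program, compileShader(gl, gl.VERTEX_SHADER, vertexSource));
    gl.attachShader(program, compileShader(gl, gl.FRAGMENT_SHADER, fragmentSource));
    gl.linkProgram(program);
    gl.useProgram(program);

    // Send vertices in; each one passes through the vertex shader, and
    // every pixel they cover passes through the fragment shader
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
    const loc = gl.getAttribLocation(program, 'aPosition');
    gl.enableVertexAttribArray(loc);
    gl.vertexAttribPointer(loc, 3, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLES, 0, vertices.length / 3);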

    These shaders (vertex and fragment) are key to the GPU’s programmability. How are they programmed? Using a special C-like shading language. Thought I: “C-like language? I know C! I should be able to master this in short order!” So I charged forward with my assumptions and proceeded to get smacked down repeatedly by the overall programming paradigm. I came to recognize this as a variation of the scientific method: develop a hypothesis (in my case, a mental model of how the system works); devise an experiment (a short program) to prove or disprove the model; realize something fundamental that I was overlooking; formulate a new hypothesis and repeat.

    First Approach: Vertex Workhorse
    My first pitch goes like this:

    • Upload DCT coefficients to GPU memory in the form of textures
    • Program a vertex mesh that encapsulates 16×16 macroblocks
    • Distribute the IDCT effort among multiple vertex shaders
    • Pass transformed Y, U, and V blocks to the fragment shader, which will convert the samples to RGB

    So the idea is that decoding of 16×16 macroblocks is parallelized. A macroblock embodies 6 blocks:


    [Figure: JPEG macroblocks]

    It would be nice to process one of these 6 blocks in each vertex. But that means drawing a square with 6 vertices. How do you do that? I eventually realized that drawing a square with 6 vertices is the recommended method for drawing a square on 3D hardware. Using 2 triangles, each with 3 vertices (0, 1, 2; 3, 4, 5):


    [Figure: 2 triangles make a square]
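
    In WebGL terms, the square might be fed to the GPU like this (my own illustration, with placeholder 2D coordinates covering the viewport):

    // Two triangles covering a square: vertices (0, 1, 2) and (3, 4, 5)
    const squareVertices = new Float32Array([
      -1, -1, // vertex 0 -+
       1, -1, // vertex 1  | triangle 1
      -1,  1, // vertex 2 -+
       1, -1, // vertex 3 -+
       1,  1, // vertex 4  | triangle 2
      -1,  1, // vertex 5 -+
    ]);
    gl.bufferData(gl.ARRAY_BUFFER, squareVertices, gl.STATIC_DRAW);
    gl.drawArrays(gl.TRIANGLES, 0, 6); // 6 vertices, 2 triangles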

    A vertex shader knows which (x, y) coordinates it has been assigned, so it could figure out which sections of coefficients it needs to access within the textures. But how would a vertex shader know which of the 6 blocks it should process? Solution: Misappropriate the vertex’s z coordinate. It’s not used for anything else in this case.
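
    A sketch of that misappropriation (the attribute name and layout are my own; the experiment’s actual code isn’t published): each vertex carries (x, y, blockIndex), and the vertex shader pulls the block index back out of z.

    // Per-vertex data: x, y, and a block index smuggled in as z
    const vertexData = new Float32Array([
      -1, -1, 0.0, // this vertex handles block 0
       1, -1, 1.0, // block 1
      -1,  1, 2.0, // block 2
      // ... and so on, one vertex per block, 6 in total
    ]);

    const vertexShaderSource = `
      attribute vec3 aPosition; // z is really the block index
      void main() {
        float blockIndex = aPosition.z; // 0..5 selects Y1-Y4, U or V
        // ... use blockIndex to pick which coefficients to fetch ...
        gl_Position = vec4(aPosition.xy, 0.0, 1.0);
      }
    `;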

    So I set all of that up. Then I hit a new roadblock: how to get the reconstructed Y, U, and V samples transported to the fragment shader? I have found that communicating between shaders is quite difficult. Texture memory? WebGL doesn’t allow shaders to write back to texture memory; shaders can only read it. The standard way to communicate data from a vertex shader to a fragment shader is to declare variables as “varying”. Up until this point, I knew about varying variables, but there was something I didn’t quite understand about them, and it nagged at me: if 3 different executions of a vertex shader set 3 different values to a varying variable, what value is passed to the fragment shader?

    It turns out that the varying variable varies, which means that the GPU passes interpolated values to each fragment shader invocation. This completely destroys this idea.
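
    A tiny shader pair makes the behaviour concrete (my own illustration, not code from the experiment): if a triangle’s three vertices write 0.0, 1.0, and 2.0 into a varying, a fragment in the middle of the triangle receives a weighted blend of all three values, not any one of them.

    const vsSource = `
      attribute vec3 aPosition;
      varying float vValue;
      void main() {
        // imagine three invocations writing 0.0, 1.0, 2.0 here
        vValue = aPosition.z;
        gl_Position = vec4(aPosition.xy, 0.0, 1.0);
      }
    `;

    const fsSource = `
      precision mediump float;
      varying float vValue;
      void main() {
        // vValue arrives interpolated across the triangle; it is
        // NOT one of the three values the vertex shader wrote
        gl_FragColor = vec4(vValue / 2.0, 0.0, 0.0, 1.0);
      }
    `;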

    Second Idea: Vertex Workhorse, Take 2
    The revised pitch is to work around the interpolation issue by just having each vertex shader invocation perform all 6 block transforms. That seems like a lot of redundant work. However, I figured out that I can draw a square with only 4 vertices by arranging them in an ‘N’ pattern and asking WebGL to draw a TRIANGLE_STRIP instead of TRIANGLES. Now it’s only doing 4x the extra work, not 6x. GPUs are supposed to be great at this type of work, so it shouldn’t matter, right?

    I wired up an experiment and then ran into a new problem: while I was able to transform a block (or at least pretend to) and load up a varying array (that wouldn’t vary, since all vertex shaders wrote the same values) to transmit to the fragment shader, the fragment shader can’t access specific values within the varying block. To clarify, a WebGL shader can use a constant value — or a value that can be evaluated as a constant at compile time — to index into arrays; a WebGL shader cannot compute an index into an array. Per my reading, this is a WebGL security consideration, and the limitation may not be present in other OpenGL(-ES) implementations.
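
    A sketch of the limitation (my own example, not from the experiment): the first lookup below compiles under WebGL’s GLSL, while the second is rejected because the index cannot be evaluated at compile time.

    const fsSource = `
      precision mediump float;
      varying vec4 vCoeffs[4];
      varying float vWhich;

      void main() {
        vec4 ok = vCoeffs[2]; // constant index: allowed
        // vec4 bad = vCoeffs[int(vWhich)]; // computed index:
        //                                  // rejected by WebGL
        gl_FragColor = ok;
      }
    `;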

    Not Giving Up Yet: Choking The Fragment Shader
    You might want to be sitting down for this pitch:

    • Vertex shader only interpolates texture coordinates to transmit to fragment shader
    • Fragment shader performs IDCT for a single Y sample, U sample, and V sample
    • Fragment shader converts YUV -> RGB

    Seems straightforward enough. However, that step concerning IDCT for Y, U, and V entails a gargantuan number of operations. When computing the IDCT for an entire block of samples, it’s possible to leverage a lot of redundancy in the math, which equates to far fewer overall operations. If you absolutely have to compute each sample individually, an 8×8 block requires 64 multiplication/accumulation (MAC) operations per sample. For 3 color planes, and including a few extra multiplications involved in the RGB conversion, that tallies up to about 200 MACs per pixel. Then there’s the fact that this approach means 4x redundant operations on the color planes.

    It’s crazy, but I just want to see if it can be done. My approach is to pre-compute a pile of IDCT constants in the JavaScript and transmit them to the fragment shader via uniform variables. For a first order optimization, the IDCT constants are formatted as 4-element vectors. This allows computing 16 dot products rather than 64 individual multiplication/addition operations. Ideally, GPU hardware executes the dot products faster (and there is also the possibility of lining these calculations up as matrices).
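
    A sketch of that formulation (uniform names and packing are my own guesses at what the experiment looked like): the 64 coefficients of a block and the 64 IDCT constants for one output sample are each packed as 16 vec4s, and the sample falls out as a sum of dot products.

    const idctFragmentShader = `
      precision mediump float;
      // 64 dequantized DCT coefficients packed as 16 vec4s
      uniform vec4 uCoeffs[16];
      // 64 precomputed IDCT constants for THIS output sample,
      // packed the same way (this is the table that would have to
      // change per sample, which is the crux of the problem below)
      uniform vec4 uIdctConsts[16];

      void main() {
        float result = 0.0;
        for (int i = 0; i < 16; i++) {
          // 16 dot products instead of 64 scalar MACs
          result += dot(uCoeffs[i], uIdctConsts[i]);
        }
        gl_FragColor = vec4(vec3(result), 1.0);
      }
    `;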

    I can report that I actually got a sample correctly transformed using this approach. Just one sample, though. Then I ran into some new problems:

    Problem #1: Computing sample #1 vs. sample #0 requires a different table of 64 IDCT constants. Okay, so create a long table of 64 * 64 IDCT constants. However, this suffers from the same problem as seen in the previous approach: I can’t dynamically compute the index into this array. What’s the alternative? Maintain 64 separate named arrays and implement 64 branches, when branching of any kind is ill-advised in shader programming to begin with? I started to go down this path until I ran into…

    Problem #2: Shaders can only be so large. 64 * 64 floats (4 bytes each) requires 16 kbytes of data, and this well exceeds the amount of shader storage that I can assume is allowed. That brings this path of exploration to a screeching halt.

    Further Brainstorming
    I suppose I could forgo pre-computing the constants and directly compute the IDCT for each sample, which would entail lots more multiplications as well as 128 cosine calculations per sample (384 considering all 3 color planes). I’m a little stuck with the transform idea right now. Maybe there are some other transforms I could try.

    Another idea would be vector quantization. What little ORBX.js literature is available indicates that there is a method to allow real-time streaming but that it requires GPU assistance to yield enough horsepower to make it feasible. When I think of such severe asymmetry between compression and decompression, my mind drifts towards VQ algorithms. As I come to understand the benefits and limitations of GPU acceleration, I think I can envision a way that something similar to SVQ1, with its copious, hierarchical vector tables stored as textures, could be implemented using shaders.

    So far, this all pertains to intra-coded video frames. What about opportunities for inter-coded frames? The only approach that I can envision here is to use WebGL’s readPixels() function to fetch the rasterized frame out of the GPU, and then upload it again as a new texture, which a new frame processing pipeline could reference. Whether this idea is plausible would require some profiling.

    Using interframes in such a manner seems to imply that the entire codec would need to operate in RGB space and not YUV.
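
    A sketch of that round trip (my own illustration; readPixels() and texImage2D() are standard WebGL 1 calls, and width/height are assumed to be defined). Note that the pixels come back as RGBA, which is what forces the RGB-space observation above:

    // Fetch the rasterized frame out of the GPU...
    const pixels = new Uint8Array(width * height * 4); // RGBA
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

    // ...then upload it again as a reference texture that the next
    // frame's processing pipeline can sample from
    const refTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, refTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);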

    Conclusions
    The people behind ORBX.js have apparently figured out a way to create a shader-based video codec. I have yet to even begin to reason out a plausible approach. However, I’m glad I did this exercise since I have finally broken through my ignorance regarding modern GPU shader programming. It’s nice to have a topic like multimedia that allows me a jumping-off point to explore other areas.

  • Laravel Process() with FFMPEG times out for any video that is even kind of large

    16 August 2024, by Hunter

    I am creating an application that lets users upload video files, and once they do, it will convert them all to .mp4s (it does some other things, but since this first part of the code is breaking before it gets a chance to reach the other stuff, and the process for each is pretty similar, let's just discuss this issue).

    Once a user submits, it gets sent to GameController@store. Below is a code sample (the one that keeps getting sent back to me as problematic) from the controller:

    // Build the ffmpeg command line and run it via Symfony Process
    $process = new Process(
        env('APP_FFMPEG', 'ffmpeg') . ' -i ' . $fileURL
        . ' -f mp4 -vcodec libx264 -preset medium -profile:v main'
        . ' -c:a aac -movflags +faststart ' . $newFileName . ' -hide_banner'
    );
    $process->setTimeout(360);    // overall limit: 6 minutes
    $process->setIdleTimeout(60); // limit with no output: 1 minute
    $process->run();
    if (!$process->isSuccessful()) {
        throw new ProcessFailedException($process);
    }

    I am using the Symfony Process() class to run the FFmpeg conversion. The weird thing is, when I run this with a 1 or 2 second video, it goes without a hitch. This, to me, rules out issues like path problems, permission problems, etc. Multiple file types work with this, as long as they are very short. As soon as I put in a video that is even around 1 min, it returns with this:

    The process "/usr/local/bin/ffmpeg -i http://localhost/storage/team_1_folder/game_26/team_1_game_26_vid_1.mp4 -f mp4 -vcodec libx264 -preset medium -profile:v main -c:a aac -movflags +faststart storage/team_1_folder/game_26/convertedVideo_1.mp4 -hide_banner" exceeded the timeout of 360 seconds.

    Now, notice that it says 'timeout of 360 seconds'. That is because I have also edited ALL of the possible timeout constraints, as well as the file size constraints that I can think of. Here is what I have done:

    Changed php.ini (in my MAMP installation) values:
    • max_execution_time = 300 (default was 30)
    • memory_limit = 1280M (default was 128M)
    • post_max_size = 800M
    • upload_max_filesize = 3200M
    • max_input_time = 300

    In the GameController (code above):
    • $process->setTimeout(360);
    • $process->setIdleTimeout(60);

    After all that, with a big file, the thing will just stall for the 6 minutes that I have set on the timeout, and then send an error. Is FFmpeg just stalling, or is this a process issue? Who knows. Errors surrounding these things are just beautifully vague.

    I have tagged Laravel, PHP, and FFmpeg in hopes that someone will recognize this issue. I know that Process() is sending me the error, but since it is sending me a timeout, I don't know what is at fault.

    I'm not ruling anything out, so hit me with ideas, even crazy ones!

    Edit
    Here is the output of the -report. I cut some of it off because the end part gets repetitive and goes for literally 1000 lines:

    ffmpeg started on 2019-06-24 at 18:50:23
Report written to "ffmpeg-20190624-185023.log"
Command line:
/usr/local/bin/ffmpeg -i http://localhost/storage/team_1_folder/game_31/team_1_game_31_vid_1.mp4 -f mp4 -vcodec libx264 -preset medium -profile:v main -c:a aac -movflags +faststart storage/team_1_folder/game_31/convertedVideo_1.mp4 -report
ffmpeg version 4.1.3 Copyright (c) 2000-2019 the FFmpeg developers
  built with Apple LLVM version 10.0.1 (clang-1001.0.46.4)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.3_1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-11.0.2.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-11.0.2.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-videotoolbox --disable-libjack --disable-indev=jack --enable-libaom --enable-libsoxr
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
  libpostproc    55.  3.100 / 55.  3.100
Splitting the commandline.
Reading option '-i' ... matched as input url with argument 'http://localhost/storage/team_1_folder/game_31/team_1_game_31_vid_1.mp4'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'mp4'.
Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'libx264'.
Reading option '-preset' ... matched as AVOption 'preset' with argument 'medium'.
Reading option '-profile:v' ... matched as option 'profile' (set profile) with argument 'main'.
Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'aac'.
Reading option '-movflags' ... matched as AVOption 'movflags' with argument '+faststart'.
Reading option 'storage/team_1_folder/game_31/convertedVideo_1.mp4' ... matched as output url.
Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option report (generate a report) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input url http://localhost/storage/team_1_folder/game_31/team_1_game_31_vid_1.mp4.
Successfully parsed a group of options.
Opening an input file: http://localhost/storage/team_1_folder/game_31/team_1_game_31_vid_1.mp4.
[NULL @ 0x7f9ad480ae00] Opening 'http://localhost/storage/team_1_folder/game_31/team_1_game_31_vid_1.mp4' for reading
[http @ 0x7f9ad3c6b540] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
[tcp @ 0x7f9ad3e00400] Original list of addresses:
[tcp @ 0x7f9ad3e00400] Address ::1 port 80
[tcp @ 0x7f9ad3e00400] Address 127.0.0.1 port 80
[tcp @ 0x7f9ad3e00400] Interleaved list of addresses:
[tcp @ 0x7f9ad3e00400] Address ::1 port 80
[tcp @ 0x7f9ad3e00400] Address 127.0.0.1 port 80
[tcp @ 0x7f9ad3e00400] Starting connection attempt to ::1 port 80
[tcp @ 0x7f9ad3e00400] Successfully connected to ::1 port 80
[http @ 0x7f9ad3c6b540] request: GET /storage/team_1_folder/game_31/team_1_game_31_vid_1.mp4 HTTP/1.1
User-Agent: Lavf/58.20.100
Accept: */*
Range: bytes=0-
Connection: close
Host: localhost
Icy-MetaData: 1


[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] ISO: File Type Major Brand: isom
[tcp @ 0x7f9ad3c6b840] Original list of addresses:
[tcp @ 0x7f9ad3c6b840] Address ::1 port 80
[tcp @ 0x7f9ad3c6b840] Address 127.0.0.1 port 80
[tcp @ 0x7f9ad3c6b840] Interleaved list of addresses:
[tcp @ 0x7f9ad3c6b840] Address ::1 port 80
[tcp @ 0x7f9ad3c6b840] Address 127.0.0.1 port 80
[tcp @ 0x7f9ad3c6b840] Starting connection attempt to ::1 port 80
[tcp @ 0x7f9ad3c6b840] Successfully connected to ::1 port 80
[http @ 0x7f9ad3c6b540] request: GET /storage/team_1_folder/game_31/team_1_game_31_vid_1.mp4 HTTP/1.1
User-Agent: Lavf/58.20.100
Accept: */*
Range: bytes=170169585-
Connection: close
Host: localhost
Icy-MetaData: 1


[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] Processing st: 0, edit list 0 - media time: 0, duration: 10584600
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] stts: 0 ctts: 0, ctts_index: 0, ctts_count: 10575
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] stts: 1 ctts: 4002, ctts_index: 1, ctts_count: 10575
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] stts: 1000 ctts: 0, ctts_index: 2, ctts_count: 10575
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] stts: 2001 ctts: 0, ctts_index: 3, ctts_count: 10575
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] stts: 3002 ctts: 0, ctts_index: 4, ctts_count: 10575
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] stts: 4003 ctts: 4004, ctts_index: 5, ctts_count: 10575
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] stts: 5004 ctts: 0, ctts_index: 6, ctts_count: 10575

    Edit 2
    I also found this towards the bottom:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] stts: 10582571 ctts: 0, ctts_index: 10573, ctts_count: 10575
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] stts: 10583572 ctts: 1001, ctts_index: 10574, ctts_count: 10575
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] Setting codecpar->delay to 1 for stream st: 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] Processing st: 1, edit list 0 - media time: 0, duration: 15563816
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] Before avformat_find_stream_info() pos: 170509020 bytes read:372203 seeks:1 nb_streams:2
[h264 @ 0x7f9ad4811200] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x7f9ad4811200] nal_unit_type: 8(PPS), nal_ref_idc: 3
[tcp @ 0x7f9ad3c72d40] Original list of addresses:
[tcp @ 0x7f9ad3c72d40] Address ::1 port 80
[tcp @ 0x7f9ad3c72d40] Address 127.0.0.1 port 80
[tcp @ 0x7f9ad3c72d40] Interleaved list of addresses:
[tcp @ 0x7f9ad3c72d40] Address ::1 port 80
[tcp @ 0x7f9ad3c72d40] Address 127.0.0.1 port 80
[tcp @ 0x7f9ad3c72d40] Starting connection attempt to ::1 port 80
[tcp @ 0x7f9ad3c72d40] Successfully connected to ::1 port 80
[http @ 0x7f9ad3c6b540] request: GET /storage/team_1_folder/game_31/team_1_game_31_vid_1.mp4 HTTP/1.1
User-Agent: Lavf/58.20.100
Accept: */*
Range: bytes=48-
Connection: close
Host: localhost
Icy-MetaData: 1


[h264 @ 0x7f9ad4811200] nal_unit_type: 5(IDR), nal_ref_idc: 1
[h264 @ 0x7f9ad4811200] Format yuv420p chosen by get_format().
[h264 @ 0x7f9ad4811200] Reinit context to 1920x1088, pix_fmt: yuv420p
[h264 @ 0x7f9ad4811200] no picture 
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] All info found
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9ad480ae00] After avformat_find_stream_info() pos: 308454 bytes read:680609 seeks:2 frames:2
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'http://localhost/storage/team_1_folder/game_31/team_1_game_31_vid_1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf56.40.101
  Duration: 00:05:52.92, start: 0.000000, bitrate: 3865 kb/s
    Stream #0:0(und), 1, 1/30000: Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 3730 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und), 1, 1/44100: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 127 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Successfully opened the file.
Parsing a group of options: output url storage/team_1_folder/game_31/convertedVideo_1.mp4.
Applying option f (force format) with argument mp4.
Applying option vcodec (force video codec ('copy' to copy stream)) with argument libx264.
Applying option profile:v (set profile) with argument main.
Applying option c:a (codec name) with argument aac.
Successfully parsed a group of options.
Opening an output file: storage/team_1_folder/game_31/convertedVideo_1.mp4.
[file @ 0x7f9ad3f02740] Setting default whitelist 'file,crypto'
Successfully opened the file.
detected 4 logical cores
[h264 @ 0x7f9ad5813a00] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x7f9ad5813a00] nal_unit_type: 8(PPS), nal_ref_idc: 3
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[graph_1_in_0_1 @ 0x7f9ad3dbcc00] Setting 'time_base' to value '1/44100'
[graph_1_in_0_1 @ 0x7f9ad3dbcc00] Setting 'sample_rate' to value '44100'
[graph_1_in_0_1 @ 0x7f9ad3dbcc00] Setting 'sample_fmt' to value 'fltp'
[graph_1_in_0_1 @ 0x7f9ad3dbcc00] Setting 'channel_layout' to value '0x3'
[graph_1_in_0_1 @ 0x7f9ad3dbcc00] tb:1/44100 samplefmt:fltp samplerate:44100 chlayout:0x3
[format_out_0_1 @ 0x7f9ad3c71040] Setting 'sample_fmts' to value 'fltp'
[format_out_0_1 @ 0x7f9ad3c71040] Setting 'sample_rates' to value '96000|88200|64000|48000|44100|32000|24000|22050|16000|12000|11025|8000|7350'
[AVFilterGraph @ 0x7f9ad3e00440] query_formats: 4 queried, 9 merged, 0 already done, 0 delayed
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264 @ 0x7f9ad5813a00] nal_unit_type: 5(IDR), nal_ref_idc: 1
[h264 @ 0x7f9ad5813a00] Format yuv420p chosen by get_format().
[h264 @ 0x7f9ad5813a00] Reinit context to 1920x1088, pix_fmt: yuv420p
[h264 @ 0x7f9ad5813a00] no picture 
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264 @ 0x7f9ad5804600] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 1
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264 @ 0x7f9ad5804c00] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 0
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264 @ 0x7f9ad5805200] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 0
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264 @ 0x7f9ad5805800] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 0
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264 @ 0x7f9ad5813a00] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 1
[graph 0 input from stream 0:0 @ 0x7f9ad3dbfb80] Setting 'video_size' to value '1920x1080'
[graph 0 input from stream 0:0 @ 0x7f9ad3dbfb80] Setting 'pix_fmt' to value '0'
[graph 0 input from stream 0:0 @ 0x7f9ad3dbfb80] Setting 'time_base' to value '1/30000'
[graph 0 input from stream 0:0 @ 0x7f9ad3dbfb80] Setting 'pixel_aspect' to value '1/1'
[graph 0 input from stream 0:0 @ 0x7f9ad3dbfb80] Setting 'sws_param' to value 'flags=2'
[graph 0 input from stream 0:0 @ 0x7f9ad3dbfb80] Setting 'frame_rate' to value '30000/1001'
[graph 0 input from stream 0:0 @ 0x7f9ad3dbfb80] w:1920 h:1080 pixfmt:yuv420p tb:1/30000 fr:30000/1001 sar:1/1 sws_param:flags=2
[format @ 0x7f9ad3dc0100] Setting 'pix_fmts' to value 'yuv420p|yuvj420p|yuv422p|yuvj422p|yuv444p|yuvj444p|nv12|nv16|nv21|yuv420p10le|yuv422p10le|yuv444p10le|nv20le'
[AVFilterGraph @ 0x7f9ad3c6e880] query_formats: 4 queried, 3 merged, 0 already done, 0 delayed
[libx264 @ 0x7f9ad5811200] using mv_range_thread = 88
[libx264 @ 0x7f9ad5811200] using SAR=1/1
[libx264 @ 0x7f9ad5811200] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7f9ad5811200] profile Main, level 4.0
[libx264 @ 0x7f9ad5811200] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'storage/team_1_folder/game_31/convertedVideo_1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.20.100
    Stream #0:0(und), 0, 1/30000: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 29.97 fps, 30k tbn, 29.97 tbc (default)
    Metadata:
      handler_name    : VideoHandler
      encoder         : Lavc58.35.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Stream #0:1(und), 0, 1/44100: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      encoder         : Lavc58.35.100 aac
Clipping frame in rate conversion by 0.000008
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264 @ 0x7f9ad5804600] nal_unit_type: 1(Coded slice of a non-IDR picture), nal_ref_idc: 0
Clipping frame in rate conversion by 0.000999
[... the same "cur_dts is invalid", "Clipping frame in rate conversion by 0.000999" and nal_unit_type lines repeat from here on ...]

  • What is Audience Segmentation? The 5 Main Types & Examples

    16 November 2023, by Erin — Analytics Tips

    The days of mass marketing with the same message for millions are long gone. Today, savvy marketers instead focus on delivering the most relevant message to the right person at the right time.

    They do this at scale by segmenting their audiences based on various data points. This isn’t an easy process because there are many types of audience segmentation. If you take the wrong approach, you risk delivering irrelevant messages to your audience — or breaking their trust with poor data management.

    In this article, we’ll break down the most common types of audience segmentation, share examples highlighting their usefulness and cover how you can segment campaigns without breaking data regulations.

    What is audience segmentation?

    Audience segmentation is when you divide your audience into multiple smaller specific audiences based on various factors. The goal is to deliver a more targeted marketing message or to glean unique insights from analytics.

    It can be as broad as dividing a marketing campaign by location or as specific as separating audiences by their interests, hobbies and behaviour.

    [Figure: Illustration of basic audience segmentation]

    Audience segmentation inherently makes a lot of sense. Consider this: an urban office worker and a rural farmer have vastly different needs. By targeting your marketing efforts towards agriculture workers in rural areas, you’re honing in on a group more likely to be interested in farm equipment.

    Audience segmentation has existed since the beginning of marketing. Advertisers used to select magazines and placements based on who typically read them. They would run a golf club ad in a golf magazine, not in the national newspaper.

    What has changed is how narrow you can make your audience segments by leveraging multiple data points.

    Why audience segmentation matters

    In a survey by McKinsey, 71% of consumers said they expect personalisation, and 76% said they get frustrated when a vendor doesn’t deliver it.

    [Figure: Statistics showing the importance of personalisation]

    These numbers reflect expectations from consumers who have actively engaged with a brand — created an account, signed up for an email list or purchased a product.

    They expect you to take that data and give them relevant product recommendations — like a shoe polishing kit if you bought nice leather loafers.

    If you don’t do any sort of audience segmentation, you’re likely to frustrate your customers with post-sale campaigns. If, for example, you just send the same follow-up email to all customers, you’d damage many relationships. Some might ask: “What? Why would you think I need that?” Then they’d promptly opt out of your email marketing campaigns.

    To avoid that, you need to segment your audience so you can deliver relevant content at all stages of the customer journey.

    5 key types of audience segmentation

    To help you deliver the right content to the right person or identify crucial insights in analytics, you can use five types of audience segmentation: demographic, behavioural, psychographic, technographic and transactional.

    [Figure: Diagram of the main types of audience segmentation]

    Demographic segmentation 

    Demographic segmentation is when you segment a larger audience based on demographic data points like location, age or other factors.

    The most basic demographic segmentation factor is location, which is easy to leverage in marketing efforts. For example, geographic segmentation can use IP addresses and separate marketing efforts by country. 

    But more advanced demographic data points are becoming increasingly sensitive to handle. Especially in Europe, GDPR makes advanced demographics a more tentative subject. Using age, education level and employment to target marketing campaigns is possible. But you need to navigate this terrain thoughtfully and responsibly, ensuring meticulous adherence to privacy regulations.

    Potential data points:

    • Location
    • Age
    • Marital status
    • Income
    • Employment 
    • Education

    Example of effective demographic segmentation:

    A clothing brand targeting diverse locations needs to account for the varying weather conditions. In colder regions, showcasing winter collections or insulated clothing might resonate more with the audience. Conversely, in warmer climates, promoting lightweight or summer attire could be more effective. 

    Here are two ads run by North Face on Facebook and Instagram to different audiences to highlight different collections:

    Each collection is featured differently and uses a different approach with its copy and even the media. With social media ads, targeting people based on advanced demographics is simple enough — you can just single out the factors when making your campaign. But if you don’t want to rely on these data-mining companies, that doesn’t mean you have no options for segmentation.

    Consider allowing people to self-select their interests or preferences by incorporating a short survey within your email sign-up form. This simple addition can enhance engagement, decrease bounce rates, and ultimately improve conversion rates, offering valuable insights into audience preferences.

    This is a great way to segment ethically and without the need for data-mining companies.

    Behavioural segmentation

    Behavioural segmentation segments audiences based on their interaction with your website or app.

    You use various data points to segment your target audience based on their actions.

    Potential data points:

    • Page visits
    • Referral source
    • Clicks
    • Downloads
    • Video plays
    • Goal completion (e.g., signing up for a newsletter or purchasing a product)

    Example of using behavioural segmentation to improve campaign efficiency:

    One effective method involves using a web analytics tool such as Matomo to uncover patterns. By segmenting actions like specific clicks and downloads, you can pinpoint valuable trends and identify actions that significantly enhance visitor conversions.

    [Figure: Example of a segmented behavioral analysis in Matomo]

    For instance, if a case study video substantially boosts conversion rates, elevate its prominence to capitalise on this success.

    Then, you can set up a conditional CTA within the video player. Make it pop up after the user has watched the entire video. Use a specific form and sign them up to a specific segment for each case study. This way, you know the prospect’s ideal use case without surveying them.

    This is an example of behavioural segmentation that doesn’t rely on third-party cookies.

    Psychographic segmentation

    Psychographic segmentation is when you segment audiences based on your interpretation of their personality or preferences.

    Potential data points:

    • Social media patterns
    • Follows
    • Hobbies
    • Interests

    Example of effective psychographic segmentation:

    Here, Adidas segments its audience based on whether they like cycling or rugby. It makes no sense to show a rugby ad to someone who’s into cycling and vice versa. But to rugby athletes, the ad is very relevant.

    If you want to avoid social platforms, you can use surveys about hobbies and interests to segment your target audience in an ethical way.

    Technographic segmentation

    Technographic segmentation is when you single out specific parts of your audience based on which hardware or software they use.

    Potential data points:

    • Type of device used
    • Device model or brand
    • Browser used

    Example of segmenting by device type to improve user experience:

    Upon noticing a considerable influx of tablet users accessing their platform, a leading news outlet decided to optimise their tablet browsing experience. They overhauled the website interface, focusing on smoother navigation and better readability for tablet users. These changes offered tablet users a seamless and enjoyable reading experience tailored precisely to their device.

    Transactional segmentation

    Transactional segmentation is when you use your customers’ purchase history to better target your marketing message to their needs.

    When consumers prefer personalisation, they typically mean based on their actual transactions, not their social media profiles.

    Potential data points:

    • Average order value
    • Product categories purchased within X months
    • X days since the last purchase of a consumable product

    Example of effective transactional segmentation:

    A pet supply store identifies a segment of customers consistently purchasing cat food but not other pet products. They create targeted email campaigns offering discounts or loyalty rewards specifically for cat-related items to encourage repeat purchases within this segment.

    If you want to improve customer loyalty and increase revenue, the last thing you should do is send generic marketing emails. Relevant product recommendations or coupons are the best way to use transactional segmentation.

    B2B-specific : Firmographic segmentation

    Beyond the five main segmentation types, B2B marketers often use “firmographic” factors when segmenting their campaigns. It’s a way of segmenting campaigns that goes beyond the considerations of the individual.

    Potential data points:

    • Company size
    • Number of employees
    • Company industry
    • Geographic location (office)

    Example of effective firmographic segmentation:

    Companies of different sizes won’t need the same solution — so segmenting leads by company size is one of the most common and effective examples of B2B audience segmentation.

    The difference here is that B2B campaigns are often segmented through manual research. With an account-based marketing approach, you start by researching your potential customers. You then separate the target audience into smaller segments (or even a one-to-one campaign).

    Start segmenting and analysing your audience more deeply with Matomo

    Segmentation is a great place to start if you want to level up your marketing efforts. Modern consumers expect to get relevant content, and you must give it to them.

    But doing so in a privacy-sensitive way is not always easy. You need the right approach to segment your customer base without alienating them or breaking regulations.

    That’s where Matomo comes in. Matomo champions privacy compliance while offering comprehensive insights and segmentation capabilities. With robust privacy controls and cookieless configuration, it ensures GDPR and other regulations are met, empowering data-driven decisions without compromising user privacy.

    Take advantage of our 21-day free trial to get insights that can help you improve your marketing strategy and better reach your target audience. No credit card required.