
Other articles (108)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Improvements to the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two following images to compare.
To do this, simply activate the Chosen plugin (Site configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-selection lists (...) -
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
On other sites (14112)
-
Files created with "ffmpeg hevc_nvenc" do not play on TV. (with video codec SDK 9.1 of nvidia)
29 January 2020, by Dashhh
Problem
- Files created with hevc_nvenc do not play on TV. (Samsung smart TV, model unknown)
My ffmpeg build configuration is below.
FFmpeg build conf
$ ffmpeg -buildconf
--enable-cuda
--enable-cuvid
--enable-nvenc
--enable-nonfree
--enable-libnpp
--extra-cflags=-I/path/cuda/include
--extra-ldflags=-L/path/cuda/lib64
--prefix=/prefix/ffmpeg_build
--pkg-config-flags=--static
--extra-libs='-lpthread -lm'
--extra-cflags=-I/prefix/ffmpeg_build/include
--extra-ldflags=-L/prefix/ffmpeg_build/lib
--enable-gpl
--enable-nonfree
--enable-version3
--disable-stripping
--enable-avisynth
--enable-libass
--enable-libfontconfig
--enable-libfreetype
--enable-libfribidi
--enable-libgme
--enable-libgsm
--enable-librubberband
--enable-libshine
--enable-libsnappy
--enable-libssh
--enable-libtwolame
--enable-libwavpack
--enable-libzvbi
--enable-openal
--enable-sdl2
--enable-libdrm
--enable-frei0r
--enable-ladspa
--enable-libpulse
--enable-libsoxr
--enable-libspeex
--enable-avfilter
--enable-postproc
--enable-pthreads
--enable-libfdk-aac
--enable-libmp3lame
--enable-libopus
--enable-libtheora
--enable-libvorbis
--enable-libvpx
--enable-libx264
--enable-libx265
--disable-ffplay
--enable-libopenjpeg
--enable-libwebp
--enable-libxvid
--enable-libvidstab
--enable-libopenh264
--enable-zlib
--enable-openssl
FFmpeg Command
- The FFmpeg encoding command:
ffmpeg -ss 1800 -vsync 0 -hwaccel cuvid -hwaccel_device 0 \
-c:v h264_cuvid -i /data/input.mp4 -t 10 \
-filter_complex "\
[0:v]hwdownload,format=nv12,format=yuv420p,\
scale=iw*2:ih*2" -gpu 0 -c:v hevc_nvenc -pix_fmt yuv444p16le -preset slow -rc cbr_hq -b:v 5000k -maxrate 7000k -bufsize 1000k -acodec aac -ac 2 -dts_delta_threshold 1000 -ab 128k -flags global_header ./makevideo_nvenc_hevc.mp4
Full log for this command: check this full log.
The reason for adding the "-color_" options to the command is as follows.
- To produce HDR video (bt2020 + smpte2084) using the Nvidia hardware encoder. (I'm studying how to make HDR videos; I'm not sure this is right.) A sketch of how such color flags are typically passed follows below.
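For reference, here is a minimal sketch of how such color metadata is commonly signalled with ffmpeg's output options. It mirrors the command above, with only the -color_primaries/-color_trc/-colorspace/-color_range flags added; whether hevc_nvenc actually writes these values into the VUI may depend on the ffmpeg and SDK versions in use.
$ ffmpeg -ss 1800 -vsync 0 -hwaccel cuvid -hwaccel_device 0 \
 -c:v h264_cuvid -i /data/input.mp4 -t 10 \
 -filter_complex "[0:v]hwdownload,format=nv12,format=yuv420p,scale=iw*2:ih*2" \
 -gpu 0 -c:v hevc_nvenc -pix_fmt yuv444p16le -preset slow -rc cbr_hq \
 -b:v 5000k -maxrate 7000k -bufsize 1000k \
 -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc -color_range tv \
 -acodec aac -ac 2 -ab 128k -flags global_header ./makevideo_nvenc_hevc.mp4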
How can I make a video using ffmpeg hevc_nvenc and have it play on TV?
Things I've done
Here’s what I’ve researched about why it doesn’t work.
The header information is not properly included in the resulting video file. So I used a program called nvhsp to add SEI and VUI information inside the video. See below for the commands and logs used.
nvhsp is an open source tool for writing VUI and SEI bitstrings into raw video (nvhsp link).
# make rawvideo for nvhsp
$ ffmpeg -vsync 0 -hwaccel cuvid -hwaccel_device 0 -c:v h264_cuvid \
-i /data/input.mp4 -t 10 \
-filter_complex "[0:v]hwdownload,format=nv12,\
format=yuv420p,scale=iw*2:ih*2" \
-gpu 0 -c:v hevc_nvenc -f rawvideo output_for_nvhsp.265
# use nvhsp
$ python nvhsp.py ./output_for_nvhsp.265 -colorprim bt2020 \
-transfer smpte-st-2084 -colormatrix bt2020nc \
-maxcll "1000,300" -videoformat ntsc -full_range tv \
-masterdisplay "G (13250,34500) B (7500,3000 ) R (34000,16000) WP (15635,16450) L (10000000,1)" \
./after_nvhsp_proc_output.265
Parsing the infile:
==========================
Prepending SEI data
Starting new SEI NALu ...
SEI message with MaxCLL = 1000 and MaxFall = 300 created in SEI NAL
SEI message Mastering Display Data G (13250,34500) B (7500,3000) R (34000,16000) WP (15635,16450) L (10000000,1) created in SEI NAL
Looking for SPS ......... [232, 22703552]
SPS_Nals_addresses [232, 22703552]
SPS NAL Size 488
Starting reading SPS NAL contents
Reading of SPS NAL finished. Read 448 of SPS NALu data.
Making modified SPS NALu ...
Made modified SPS NALu-OK
New SEI prepended
Writing new stream ...
Progress: 100%
=====================
Done!
File nvhsp_after_output.mp4 created.
# after process
$ ffmpeg -y -f rawvideo -r 25 -s 3840x2160 -pix_fmt yuv444p16le -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc -color_range tv -i ./1/after_nvhsp_proc_output.265 -vcodec copy ./1/result.mp4 -hide_banner
Truncating packet of size 49766400 to 3260044
[rawvideo @ 0x40a6400] Estimating duration from bitrate, this may be inaccurate
Input #0, rawvideo, from './1/nvhsp_after_output.265':
Duration: N/A, start: 0.000000, bitrate: 9953280 kb/s
Stream #0:0: Video: rawvideo (Y3[0][16] / 0x10003359), yuv444p16le(tv, bt2020nc/bt2020/smpte2084), 3840x2160, 9953280 kb/s, 25 tbr, 25 tbn, 25 tbc
[mp4 @ 0x40b0440] Could not find tag for codec rawvideo in stream #0, codec not currently supported in container
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Last message repeated 1 times
Goal
-
I want to generate metadata normally when encoding a video through hevc_nvenc.
-
I want to create a video through hevc_nvenc and play HDR video on a smart TV with 10-bit color depth support.
Additional
-
Is it normal for ffmpeg hevc_nvenc not to generate metadata in the resulting video file? Or is it a bug?
-
Please refer to the image below. (*’알 수 없음’ meaning ’unknown’)
- if you need more detail file info, check this Gist Link (by ffprobe)
-
However, if you encode a file in libx265, the attribute information is entered correctly as shown below.
- if you need more detail file info, check this Gist Link
However, when using hevc_nvenc, all information is missing.
- I used the following options with ffprobe (a quick check sketch follows below):
-show_streams -show_programs -show_format -show_data -of json -show_frames -show_log 56
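For reference, a compact way to check which color fields actually ended up in the encoded stream is a sketch like the following (standard ffprobe options; the file name refers to the output created above):
$ ffprobe -v error -select_streams v:0 \
 -show_entries stream=pix_fmt,color_range,color_space,color_transfer,color_primaries \
 -of json ./makevideo_nvenc_hevc.mp4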
-
ffmpeg pipe process ends right after writing first buffer data to input stream and does not keep running
6 May, by Taketo Matsunaga
I have been trying to convert 16-bit PCM (s16le) audio data to WebM using ffmpeg in C#.
But the process ends right after writing the first buffer of data to standard input.
It has exited with status 0, meaning success, but I do not know why.
Could anyone tell me why?


I would appreciate it if you could support me.


public class SpeechService : ISpeechService
 {
 
 /// <summary>
 /// Defines the _audioInputStream
 /// </summary>
 private readonly MemoryStream _audioInputStream = new MemoryStream();

 public async Task SendPcmAsWebmViaWebSocketAsync(
 MemoryStream pcmAudioStream,
 int sampleRate,
 int channels) 
 {
 string inputFormat = "s16le";

 var ffmpegProcessInfo = new ProcessStartInfo
 {
 FileName = _ffmpegPath,
 Arguments =
 $"-f {inputFormat} -ar {sampleRate} -ac {channels} -i pipe:0 " +
 $"-f webm pipe:1",
 RedirectStandardInput = true,
 RedirectStandardOutput = true,
 RedirectStandardError = true,
 UseShellExecute = false,
 CreateNoWindow = true,
 };

 _ffmpegProcess = new Process { StartInfo = ffmpegProcessInfo };

 Console.WriteLine("Starting FFmpeg process...");
 try
 {

 if (!await Task.Run(() => _ffmpegProcess.Start()))
 {
 Console.Error.WriteLine("Failed to start FFmpeg process.");
 return;
 }
 Console.WriteLine("FFmpeg process started.");

 }
 catch (Exception ex)
 {
 Console.Error.WriteLine($"Error starting FFmpeg process: {ex.Message}");
 throw;
 }

 var encodeAndSendTask = Task.Run(async () =>
 {
 try
 {
 using var ffmpegOutputStream = _ffmpegProcess.StandardOutput.BaseStream;
 byte[] buffer = new byte[8192]; // Temporary buffer to read data
 byte[] sendBuffer = new byte[8192]; // Buffer to accumulate data for sending
 int sendBufferIndex = 0; // Tracks the current size of sendBuffer
 int bytesRead;

 Console.WriteLine("Reading WebM output from FFmpeg and sending via WebSocket...");
 while (true)
 {
 if ((bytesRead = await ffmpegOutputStream.ReadAsync(buffer, 0, buffer.Length)) > 0)
 {
 // Copy data to sendBuffer
 Array.Copy(buffer, 0, sendBuffer, sendBufferIndex, bytesRead);
 sendBufferIndex += bytesRead;

 // If sendBuffer is full, send it via WebSocket
 if (sendBufferIndex >= sendBuffer.Length)
 {
 var segment = new ArraySegment<byte>(sendBuffer, 0, sendBuffer.Length);
 _ws.SendMessage(segment);
 sendBufferIndex = 0; // Reset the index after sending
 }
 }
 }
 }
 catch (OperationCanceledException)
 {
 Console.WriteLine("Encode/Send operation cancelled.");
 }
 catch (IOException ex) when (ex.InnerException is ObjectDisposedException)
 {
 Console.WriteLine("Stream was closed, likely due to process exit or cancellation.");
 }
 catch (Exception ex)
 {
 Console.Error.WriteLine($"Error during encoding/sending: {ex}");
 }
 });

 var errorReadTask = Task.Run(async () =>
 {
 Console.WriteLine("Starting to read FFmpeg stderr...");
 using var errorReader = _ffmpegProcess.StandardError;
 try
 {
 string? line;
 while ((line = await errorReader.ReadLineAsync()) != null) 
 {
 Console.WriteLine($"[FFmpeg stderr] {line}");
 }
 }
 catch (OperationCanceledException) { Console.WriteLine("FFmpeg stderr reading cancelled."); }
 catch (TimeoutException) { Console.WriteLine("FFmpeg stderr reading timed out (due to cancellation)."); }
 catch (Exception ex) { Console.Error.WriteLine($"Error reading FFmpeg stderr: {ex.Message}"); }
 Console.WriteLine("Finished reading FFmpeg stderr.");
 });

 }

 public async Task AppendAudioBuffer(AudioMediaBuffer audioBuffer)
 {
 try
 {
 // audio for a 1:1 call
 var bufferLength = audioBuffer.Length;
 if (bufferLength > 0)
 {
 var buffer = new byte[bufferLength];
 Marshal.Copy(audioBuffer.Data, buffer, 0, (int)bufferLength);

 _logger.Info("_ffmpegProcess.HasExited:" + _ffmpegProcess.HasExited);
 using var ffmpegInputStream = _ffmpegProcess.StandardInput.BaseStream;
 await ffmpegInputStream.WriteAsync(buffer, 0, buffer.Length);
 await ffmpegInputStream.FlushAsync(); // Flush the buffer
 _logger.Info("Wrote buffer data.");

 }
 }
 catch (Exception e)
 {
 _logger.Error(e, "Exception happend writing to input stream");
 }
 }



Starting FFmpeg process...
FFmpeg process started.
Starting to read FFmpeg stderr...
Reading WebM output from FFmpeg and sending via WebSocket...
[FFmpeg stderr] ffmpeg version 7.1.1-essentials_build-www.gyan.dev Copyright (c) 2000-2025 the FFmpeg developers
[FFmpeg stderr] built with gcc 14.2.0 (Rev1, Built by MSYS2 project)
[FFmpeg stderr] configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
[FFmpeg stderr] libavutil 59. 39.100 / 59. 39.100
[FFmpeg stderr] libavcodec 61. 19.101 / 61. 19.101
[FFmpeg stderr] libavformat 61. 7.100 / 61. 7.100
[FFmpeg stderr] libavdevice 61. 3.100 / 61. 3.100
[FFmpeg stderr] libavfilter 10. 4.100 / 10. 4.100
[FFmpeg stderr] libswscale 8. 3.100 / 8. 3.100
[FFmpeg stderr] libswresample 5. 3.100 / 5. 3.100
[FFmpeg stderr] libpostproc 58. 3.100 / 58. 3.100

[2025-05-06 15:44:43,598][INFO][XbLogger.cs:85] _ffmpegProcess.HasExited:False
[2025-05-06 15:44:43,613][INFO][XbLogger.cs:85] Wrote buffer data.
[2025-05-06 15:44:43,613][INFO][XbLogger.cs:85] Wrote buffer data.
[FFmpeg stderr] [aist#0:0/pcm_s16le @ 0000025ec8d36040] Guessed Channel Layout: mono
[FFmpeg stderr] Input #0, s16le, from 'pipe:0':
[FFmpeg stderr] Duration: N/A, bitrate: 256 kb/s
[FFmpeg stderr] Stream #0:0: Audio: pcm_s16le, 16000 Hz, mono, s16, 256 kb/s
[FFmpeg stderr] Stream mapping:
[FFmpeg stderr] Stream #0:0 -> #0:0 (pcm_s16le (native) -> opus (libopus))
[FFmpeg stderr] [libopus @ 0000025ec8d317c0] No bit rate set. Defaulting to 64000 bps.
[FFmpeg stderr] Output #0, webm, to 'pipe:1':
[FFmpeg stderr] Metadata:
[FFmpeg stderr] encoder : Lavf61.7.100
[FFmpeg stderr] Stream #0:0: Audio: opus, 16000 Hz, mono, s16, 64 kb/s
[FFmpeg stderr] Metadata:
[FFmpeg stderr] encoder : Lavc61.19.101 libopus
[FFmpeg stderr] [out#0/webm @ 0000025ec8d36200] video:0KiB audio:1KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: 67.493113%
[FFmpeg stderr] size= 1KiB time=00:00:00.04 bitrate= 243.2kbits/s speed=2.81x
Finished reading FFmpeg stderr.
[2025-05-06 15:44:44,101][INFO][XbLogger.cs:85] _ffmpegProcess.HasExited:True
[2025-05-06 15:44:44,132][ERROR][XbLogger.cs:67] Exception happend writing to input stream
System.ObjectDisposedException: Cannot access a closed file.
 at System.IO.FileStream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
 at System.IO.Stream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count)
 at EchoBot.Media.SpeechService.AppendAudioBuffer(AudioMediaBuffer audioBuffer) in C:\Users\tm068\Documents\workspace\myprj\xbridge-teams-bot\src\EchoBot\Media\SpeechService.cs:line 242



I am expecting the ffmpeg process to keep running.


-
How to make Matomo GDPR compliant in 12 steps
3 April 2018, by InnoCraft
Important note: this blog post has been written by digital analysts, not lawyers. The purpose of this article is to briefly show you where Matomo comes into play within the GDPR process. It reflects our interpretation of the UK privacy commission, the ICO, and cannot be considered professional legal advice. Like the GDPR itself, this information is subject to change. We strongly advise you to check with the different privacy authorities in order to have up-to-date information.
The General Data Protection Regulation (EU) 2016/679, also referred to as RGPD in French and Datenschutz-Grundverordnung (DS-GVO) in German, is a regulation on data protection and privacy for all individuals within the European Union. It concerns organizations worldwide dealing with EU citizens and comes into force on 25 May 2018.
The GDPR applies to ‘personal data’, meaning any information relating to an identifiable person who can be directly or indirectly identified, in particular by reference to an identifier. This includes cookies, IP addresses, user IDs, location, and any other data you may have collected.
Below we list the 12 steps recommended by the UK privacy commissioner to become GDPR compliant, and what you need to do for each step.
The 12 steps of GDPR compliance according to the ICO and how they fit with Matomo
As mentioned in one of our previous blog posts about GDPR, if you are not collecting any personal data with Matomo, then you are not concerned by what is written below.
If you are processing personal data in any way, here are the 12 steps to follow, along with some recommendations on how to be GDPR compliant with Matomo:
1 – Awareness
Make sure that people within your organization know that you are using Matomo to analyze traffic on the website/app. If needed, send them the link to the “What is Matomo?” page.
2 – Information you hold
List all the personal data you are processing with Matomo within your record of processing activities. We personally use the template provided by the ICO, which is composed of a set of 30 questions you need to answer regarding your use of Matomo. We have published an article which walks you through the list of questions specifically for the use case of Matomo Analytics. Please be aware that personal data may also be tracked in non-obvious ways, for example as part of page URLs or page titles.
3 – Communicating privacy information
a – Add a privacy notice
Add a privacy notice wherever you are using Matomo in order to collect personal data. Please refer to the ICO documentation in order to learn how to write a privacy notice. You can learn more in our article about creating your privacy notice for Matomo Analytics. Make sure that a privacy policy link is always available on your website or app.
b – Add Matomo to your privacy policy page
Add Matomo to the list of technologies you are using on your privacy policy page and add all the necessary information to it as requested in the following checklist. To learn more check out our article about Privacy policy.
4 – Individuals’ rights
Make sure that your Matomo installation respects all individuals’ rights. In short, you will need to know which Matomo features to use in order to respect user rights (right of access, right of rectification, right of erasure…). These features are available starting with Matomo 3.5.0, released on May 8th: GDPR tools for Matomo (user guide).
5 – Subject access requests
Make sure that you are able to answer an access request from a data subject for Matomo. For example, when a person would like to access the personal data you have collected about her or him, you will need to be able to provide this information. We recommend you design a process for this, such as defining who is dealing with it, and check that it is working. If you can answer the nightmare letter, then you are ready. The needed features for this in Matomo will be available soon.
6 – Lawful basis for processing personal data
There are different lawful bases you can use under GDPR. It can be either “legitimate interest” or “explicit consent”. Do not forget to mention it within your privacy policy page. Read more in our article about lawful bases.
7 – Consent
Users should be able to withdraw their consent at any time. Fortunately, Matomo provides a feature to do just that: add the opt-out feature to your privacy policy page.
We also offer a tool that optionally allows you to require consent before any data is tracked. This is useful if a person should only be tracked after she or he has given explicit consent to be tracked.
8 – Children
If your website or app is targeted at children and you are using Matomo, extra measures will need to be taken. For example, you will need to write your privacy policy even more clearly and, moreover, obtain parental consent if the child is below 13. As this is a very specific case, we strongly recommend you follow this link for further information.
9 – Data breaches
As you may be collecting personal data with Matomo, you should also check your “data breach procedure” to determine whether a leak may have consequences for the privacy of the data subject. Please consult the ICO’s website for further information.
10 – Data Protection by Design and Data Protection Impact Assessments
Ask yourself if you really need to process personal data within Matomo. If the data you are processing within Matomo is sensitive, we strongly recommend you carry out a Data Protection Impact Assessment. The French privacy commissioner, the CNIL, provides open source PIA software that helps to carry out data protection impact assessments.
11 – Data Protection Officers
If you are reading this article and you are the Data Protection Officer (DPO), you are not concerned by this step. If that is not the case, your duty is to provide the DPO (if your business has one) with this blog post so that she or he can ask you questions regarding your use of Matomo. Note that your DPO may also be interested in the different data that Matomo can process: “What data does Matomo track?” (FAQ).
12 – International
Matomo data is hosted wherever you want. Depending on the location of the data, you will need to demonstrate specific safeguards, except within the EU. For example, regarding the USA, you will have to check whether your web hosting platform is registered with the Privacy Shield: privacyshield.gov/list
Note: our Matomo cloud infrastructure is based in France.
That’s the end of this blog post. As GDPR is a huge topic, we will release many more blog posts in the upcoming weeks. If there are any Matomo GDPR related topics you would like us to write about, please feel free to contact us.
The post How to make Matomo GDPR compliant in 12 steps appeared first on Analytics Platform - Matomo.