
Other articles (39)
-
Customising categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as the equivalent of a section (rubrique).
For a category-type document, the fields offered by default are: Texte
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a media-type document, the fields not displayed by default are: Descriptif rapide
It is also in this configuration area that you can specify the (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or later. If necessary, contact the administrator of your MédiaSpip to find out.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (9872)
-
How to Replace Duplicate Frames in a Video with Black Frames using ffmpeg?
21 March 2021, by Yam Shargil
I'm trying to trim all "no action, no movement" frames out of my screen recording. Some of my screen recordings are really long (like 100 hours long).


I found this:
How to Simply Remove Duplicate Frames from a Video using ffmpeg


ffmpeg -i in.mp4 -vf "select='if(gt(scene,0.01),st(1,t),lte(t-ld(1),1))',setpts=N/FRAME_RATE/TB" trimmed.mp4



I don't want to lose any important frames, so in order to test the threshold I want to replace (not remove) all the "no action" frames with black frames.


That's my best shot so far, not my proudest work:


ffmpeg -i in.mp4 -vf "select='if(gt(scene,0.01),st(1,t),lte(t-ld(1),1))',drawbox=color=black:t=fill" out.mp4
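
One rough, untested way to preview what the threshold would cut, rather than a tested solution: split the video into two branches, keep the "active" frames in one branch using the same select expression, black out the complementary frames in the other branch with drawbox, and merge the branches back by timestamp with the interleave filter. The 0.01 scene threshold and the 1-second window are taken from the command above; the labels and the output name preview.mp4 are assumptions.

ffmpeg -i in.mp4 -filter_complex "[0:v]split=2[a][b];[a]select='if(gt(scene,0.01),st(1,t),lte(t-ld(1),1))'[keep];[b]select='not(if(gt(scene,0.01),st(1,t),lte(t-ld(1),1)))',drawbox=color=black:t=fill[blk];[keep][blk]interleave[v]" -map "[v]" preview.mp4

Every input frame ends up in exactly one of the two branches, so nothing is dropped; frames the original command would have removed simply show up as black. If the "no action" stretches are truly static, the freezedetect filter in recent FFmpeg builds is another way to check the threshold, for example ffmpeg -i in.mp4 -vf freezedetect=n=0.003:d=2 -an -f null - which only logs the start and end times of frozen sections.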



-
Sending Pictures over Network with Unity C#
30 September 2020, by TobiGG B.
I am currently working with the HoloLens and thus Unity. The idea is to livestream the webcam, with as little delay as possible, to an instance of FFmpeg on a different device for encoding. However, I run into some issues.


My first problem is the conversion from a Color32 array to a byte array. It consumes a huge amount of bandwidth, especially when I am trying to record and stream at 30 fps and a resolution of 1280x720.
Here is some code:


IEnumerator Init()
{
    webCam = new WebCamTexture(1280, 720, 30);
    webCam.Play();
    image.texture = webCam;
    Debug.Log("WebCam Width: " + webCam.width);
    Debug.Log("WebCam Height: " + webCam.height);
    currentTexture = new Texture2D(webCam.width, webCam.height);

    data = new Color32[webCam.width * webCam.height];
    listener = new TcpListener(IPAddress.Any, 10305);
    listener.Start();

    // WebCamTexture reports a small placeholder size until the first real frame arrives
    while (webCam.width < 100)
    {
        yield return null;
    }

    StartCoroutine("Recording");
}

WaitForEndOfFrame endOfFrame = new WaitForEndOfFrame();

IEnumerator Recording()
{
    TcpClient tcpClient = null;
    NetworkStream tcpStream = null;
    bool isConnected = false;

    // Accept a client on a background thread (Loom helper)
    Loom.RunAsync(() =>
    {
        while (!stopCapture)
        {
            tcpClient = listener.AcceptTcpClient();
            Debug.Log("Client Connected!");
            isConnected = true;
            tcpStream = tcpClient.GetStream();
        }
    });

    while (!isConnected)
    {
        yield return endOfFrame;
    }

    readyToGetFrame = true;
    byte[] messageLength = new byte[SEND_RECEIVE_COUNT];

    while (!stopCapture)
    {
        yield return endOfFrame;

        // Grab the current frame and flatten it to raw RGBA bytes (4 bytes per pixel)
        webCam.GetPixels32(data);
        byte[] webCamBytes = Utilities.Color32ArrayToByteArray(data);

        readyToGetFrame = false;
        // Send the frame on a background thread and wait until the write has finished
        Loom.RunAsync(() =>
        {
            tcpStream.Write(webCamBytes, 0, webCamBytes.Length);
            readyToGetFrame = true;
        });

        while (!readyToGetFrame)
        {
            yield return endOfFrame;
        }
    }
}



public static class Utilities
{
    public static byte[] Color32ArrayToByteArray(Color32[] colors)
    {
        if (colors == null || colors.Length == 0)
        {
            return null;
        }

        int lengthOfColor32 = Marshal.SizeOf(typeof(Color32));
        int byteArrayLength = lengthOfColor32 * colors.Length;
        byte[] bytes = new byte[byteArrayLength];

        GCHandle handle = default(GCHandle);

        try
        {
            // Pin the managed Color32 array and copy its raw memory (RGBA byte order)
            handle = GCHandle.Alloc(colors, GCHandleType.Pinned);
            IntPtr ptr = handle.AddrOfPinnedObject();
            Marshal.Copy(ptr, bytes, 0, byteArrayLength);
        }
        finally
        {
            if (handle != default(GCHandle))
            {
                handle.Free();
            }
        }

        return bytes;
    }
}



To fix this problem, I tried to use EncodeToPNG and EncodeToJPG instead. But that leads to my second problem: using these methods causes a big loss of performance.


So instead of this


webCam.GetPixels32(data);
byte[] webCamBytes = Utilities.Color32ArrayToByteArray(data);



I use this


currentTexture.SetPixels(webCam.GetPixels());
byte[] webCamBytes = currentTexture.EncodeToJPG();



Thus my question:
Is there any way to reduce the bandwidth, maybe by cutting off the alpha values since they are not needed? Or is there another good way, such as converting to a different color or picture format?


Because right now I've run out of usable ideas and feel kinda stuck. Might just be me though ;-)


PS: Using external libraries is a tad difficult, as this needs to run on the HoloLens and due to other regulations.


Thank you in advance!
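
Regarding the question above about dropping the alpha channel: a Color32 frame is 4 bytes per pixel (RGBA), so packing it into 3 bytes per pixel before sending cuts the raw payload by 25%. The helper below is a hypothetical sketch, not part of the original post; FrameUtils and Color32ArrayToRgb24 are made-up names, and the receiving side has to be told to expect raw rgb24 frames of width * height * 3 bytes.

using UnityEngine;

public static class FrameUtils
{
    // Hypothetical helper: packs a Color32 frame into tightly packed RGB bytes,
    // dropping the unused alpha channel (3 bytes per pixel instead of 4).
    public static byte[] Color32ArrayToRgb24(Color32[] colors)
    {
        if (colors == null || colors.Length == 0)
        {
            return null;
        }

        byte[] bytes = new byte[colors.Length * 3];
        int offset = 0;
        for (int i = 0; i < colors.Length; i++)
        {
            bytes[offset++] = colors[i].r;
            bytes[offset++] = colors[i].g;
            bytes[offset++] = colors[i].b;
        }
        return bytes;
    }
}

On the FFmpeg side, assuming the HoloLens stays the TCP server on port 10305 as in the code above, such a stream could be read with something along the lines of ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1280x720 -framerate 30 -i tcp://<hololens-ip>:10305 out.mp4. Note that GetPixels32 returns rows bottom-up, so the received image may need a vflip filter, and raw RGB at 1280x720 and 30 fps is still roughly 80 MB/s, so a real bandwidth reduction usually means encoding (for example MJPEG or H.264) rather than sending raw frames.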


-
CJEU rules US cloud servers don’t comply with GDPR and what this means for web analytics
17 July 2020, by Jake Thornton