
Other articles (67)
-
Improvements to the base version
13 September 2013
A nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images to compare.
To do so, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites for publishing documents of all types.
It creates "media" items, namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;
-
Custom menus
14 November 2010
MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators configure these menus in detail.
Menus created when the site is initialised
By default, three menus are created automatically when the site is initialised: The main menu; Identifier: barrenav; This menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)
On other sites (8152)
-
Accord.Video.FFMPEG.VideoFileWriter writes different data from the input data
14 November 2017, by Rasool Ahmed
I'm working on a project that encrypts video frames using the RC4 algorithm and saves these frames in a playable video file.
I used a package named Accord.Video.FFMPEG. This package has classes (VideoFileReader and VideoFileWriter) that read and write video frames.
The first step is reading the video:
VideoHandler v = new VideoHandler();
OpenFileDialog newimag = new OpenFileDialog();
if (newimag.ShowDialog() == DialogResult.OK)
{
textfile = newimag.FileName;
picbox.ImageLocation = textfile;
status1.Text = "loaded";
//MessageBox.Show("your video file has been loaded successfully"); // an idea for showing a message
}
bytedata = v.ReadAllFrameBytes(textfile);
The second step is encrypting the video frames:
byte[] bn = new byte[bytedata.Length]; // new array to hold the encrypted data
bn = Encrypt(bytedata, ba);
The last step is saving the encrypted frames:
v.WriteFramesData(newfilepath, bn);
My encryption algorithm encrypts and decrypts with the same algorithm and key.
These steps work on text and image files, but when I use them on video I can't restore the encrypted video. After some testing, I found that VideoFileWriter doesn't write the same frames it is given. Why?
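For reference, the kind of symmetric routine described above is typically a plain RC4 keystream XOR, roughly like the sketch below. This is only an illustration (the actual Encrypt method is not shown here), and it assumes the second argument, ba, holds the key bytes:
// Illustrative RC4 sketch, not necessarily the exact Encrypt used above.
// Because RC4 only XORs the data with a keystream, calling it twice with
// the same key restores the original bytes.
public static byte[] Encrypt(byte[] data, byte[] key)
{
    // Key-scheduling algorithm (KSA): initialise and shuffle the state array
    byte[] S = new byte[256];
    for (int i = 0; i < 256; i++)
        S[i] = (byte)i;
    int j = 0;
    for (int i = 0; i < 256; i++)
    {
        j = (j + S[i] + key[i % key.Length]) & 0xFF;
        byte tmp = S[i]; S[i] = S[j]; S[j] = tmp;
    }

    // Pseudo-random generation algorithm (PRGA): XOR each byte with the keystream
    byte[] output = new byte[data.Length];
    int a = 0, b = 0;
    for (int k = 0; k < data.Length; k++)
    {
        a = (a + 1) & 0xFF;
        b = (b + S[a]) & 0xFF;
        byte tmp = S[a]; S[a] = S[b]; S[b] = tmp;
        output[k] = (byte)(data[k] ^ S[(S[a] + S[b]) & 0xFF]);
    }
    return output;
}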
Here is the VideoHandler class I made:
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Drawing.Imaging;
using System.IO;
using Accord.Video.FFMPEG;
namespace imgtobyt_1_in_c
{
class VideoHandler
{
public List<byte[]> data;
public byte[] imagedata;
public int Height, Width;
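// Reads the first 100 frames of the video and returns all of their RGB pixel bytes as one flat array.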
public byte[] ReadAllFrameBytes(string FileName)
{
// create instance of video reader
VideoFileReader reader = new VideoFileReader();
// open video file
reader.Open(FileName);
Height = reader.Height;
Width = reader.Width;
data = new List<byte[]>();
// read video frames
for (int i = 0; i < 100; i++) //change 100 to reader.FrameCount
{
Bitmap videoFrame = reader.ReadVideoFrame();
byte[] framebytes = GetBytesFromFrame(videoFrame);
data.Add(framebytes);
// dispose the frame when it is no longer required
videoFrame.Dispose();
}
reader.Close();
imagedata = new byte[data.Count * data[0].Length];
int c = 0;
for (int i = 0; i < data.Count; i++)
{
byte[] d = data[i];
for (int x = 0; x < d.Length; x++)
{
imagedata[c] = d[x];
c++;
}
}
return imagedata;
}
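// Flattens one frame into a raw byte array: 3 bytes (R, G, B) per pixel, row by row.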
public byte[] GetBytesFromFrame(Bitmap Frame)
{
LockBitmap lockBitmap = new LockBitmap(Frame);
lockBitmap.LockBits();
byte[] framebytes = new byte[Frame.Width * Frame.Height * 3];
int z = 0;
for (int x = 0; x < lockBitmap.Height; x++)
for (int y = 0; y < lockBitmap.Width; y++)
{
Color Pixel = lockBitmap.GetPixel(y, x);
framebytes[z] = Pixel.R;
z++;
framebytes[z] = Pixel.G;
z++;
framebytes[z] = Pixel.B;
z++;
}
lockBitmap.UnlockBits();
return framebytes;
//using (var stream = new MemoryStream())
//{
// Frame.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
// return stream.ToArray();
//}
}
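// Rebuilds a 24bpp bitmap from the raw byte array, advancing offset by 3 bytes per pixel.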
public Bitmap GetFrameFromBytes(byte[] Framebytes, ref int offset, int Width, int Height)
{
Bitmap Frame = new Bitmap(Width, Height, PixelFormat.Format24bppRgb);
LockBitmap lockBitmap = new LockBitmap(Frame);
lockBitmap.LockBits();
for (int x = 0; x < Height; x++)
for (int y = 0; y < Width; y++)
{
Color Pixel = Color.FromArgb(Framebytes[offset], Framebytes[offset + 1], Framebytes[offset + 2]); offset += 3;
lockBitmap.SetPixel(y, x, Pixel);
}
lockBitmap.UnlockBits();
return Frame;
//Bitmap bmp;
//using (var ms = new MemoryStream(Framebytes))
//{
// bmp = new Bitmap(ms);
//}
//return bmp;
}
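// Re-encodes the raw byte data as video frames and writes them to a new file.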
public void WriteFramesData(string FileName, byte[] data)
{
// create instance of video writer
VideoFileWriter writer = new VideoFileWriter();
// create new video file
writer.Open(FileName, Width, Height);
int offset = 0;
// write video frames
for (int i = 0; i < 100; i++)
{
// create a bitmap to save into the video file
Bitmap Frame = GetFrameFromBytes(data, ref offset, Width, Height);
writer.WriteVideoFrame(Frame);
}
writer.Close();
}
}
}
Please, I need to make this work.
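To narrow the problem down, a round-trip check along the following lines shows whether the frames survive the write/read cycle at all. This is only a sketch: it reuses the VideoHandler class above together with the Encrypt method (not shown), and the file names and key are placeholders:
// Hypothetical round-trip check; "input.avi", "encrypted.avi" and the key are placeholders.
// (SequenceEqual requires using System.Linq.)
byte[] key = new byte[] { 0x01, 0x02, 0x03, 0x04 };

VideoHandler handler = new VideoHandler();
byte[] original = handler.ReadAllFrameBytes("input.avi");

byte[] encrypted = Encrypt(original, key); // RC4: the same call encrypts and decrypts
handler.WriteFramesData("encrypted.avi", encrypted);

VideoHandler handler2 = new VideoHandler();
byte[] readBack = handler2.ReadAllFrameBytes("encrypted.avi");
byte[] decrypted = Encrypt(readBack, key);

// If the writer re-encodes (compresses) the frames, readBack differs from encrypted,
// so decrypted will no longer match original.
Console.WriteLine(original.SequenceEqual(decrypted)
    ? "round trip OK"
    : "frames were altered between write and read");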
-
How to use Google ML Kit Selfie Segmentation in Flutter to overlay the segmented user on top of a chosen image or video? [closed]
4 April 2024, by Amr
I'm working on implementing a virtual background feature using the Google ML Kit selfie segmentation Flutter plugin. The goal is to separate the user from the background in the live camera feed, overlay the user on a chosen image or video, and then create a new video combining these elements.


Currently, I'm successfully obtaining the mask using the plugin. However, I'm concerned about performance issues if I directly copy pixels to achieve the overlay effect, especially in real-time scenarios.


I'm looking for alternative approaches or best practices to efficiently achieve this functionality without sacrificing performance. Any insights or suggestions on how to approach this would be greatly appreciated.


I thought about using ffmpeg, but I'm not sure how to get the user's pixels from the camera feed into it; I'm not a Flutter developer, so I'm not sure how to do this.
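For the offline compositing step, one possible direction is to let ffmpeg merge the mask in as an alpha channel and overlay the result onto the background. This is only a sketch: the file names are placeholders, and it assumes the camera frames and the segmentation masks have been exported as two same-sized videos, with the mask as a grayscale matte:
# Sketch only: background.jpg, camera.mp4 and mask.mp4 are placeholder inputs.
ffmpeg -loop 1 -i background.jpg -i camera.mp4 -i mask.mp4 \
  -filter_complex "[1:v][2:v]alphamerge[fg];[0:v][fg]overlay=shortest=1[out]" \
  -map "[out]" composited.mp4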


-
Revision f167433d9c : fix the mv_ref_idx issue
20 August 2013, by Jim Bankoski
Changed Paths:
Modify /vp9/common/vp9_mvref_common.c
fix the mv_ref_idx issue
The following issue was reported:
https://code.google.com/p/webm/issues/detail?id=601&q=jimbankoski&sort=-id&colspec=ID%20Pri%20mstone%20ReleaseBlock%20Type%20Component%20Status%20Owner%20Summary
This code makes the choice and code cleaner and removes any question about whether the border needs to be checked.
Change-Id: Ia7aecfb3168e340618805bd318499176c2989597