
Other articles (48)
- XMP PHP
13 May 2011
According to Wikipedia, XMP means:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphic-design applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it manages a set of dynamic tags for use in the context of the Semantic Web.
XMP makes it possible to record, in the form of an XML document, information about a file: title, author, history (...)
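As a purely illustrative example (not taken from the article; the field values below are placeholders), a minimal XMP packet recording a title, an author and a creation date might look like this:

<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about=""
        xmlns:dc="http://purl.org/dc/elements/1.1/"
        xmlns:xmp="http://ns.adobe.com/xap/1.0/">
      <dc:title>
        <rdf:Alt><rdf:li xml:lang="x-default">Placeholder title</rdf:li></rdf:Alt>
      </dc:title>
      <dc:creator>
        <rdf:Seq><rdf:li>Placeholder author</rdf:li></rdf:Seq>
      </dc:creator>
      <xmp:CreateDate>2011-05-13T00:00:00Z</xmp:CreateDate>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>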
- Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
- Installation in farm mode
4 February 2011
Farm mode makes it possible to host several MediaSPIP-type sites while installing its functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP’s usual private area is no longer used.
As a first step, you must have installed the same files as for the installation (...)
On other sites (5089)
- A lot of GREEN color when converting YUV420p -> RGB in an OpenGL ES 2.0 shader on iOS
25 October 2012, by 이형근
I want to make a movie player for iOS using ffmpeg and OpenGL ES 2.0, but I have a problem: the output RGB image has a lot of green in it.
Here are the code and images:
- 480x320 frame width & height
- 512x512 texture width & height
I get the YUV420p raw data from an ffmpeg AVFrame.
// Copy each Y/U/V plane row by row, dropping the linesize padding
// (the chroma planes are half the width and height of the luma plane).
for (int i = 0, nDataLen = 0; i < 3; i++) {
    int nShift = (i == 0) ? 0 : 1;
    uint8_t *pYUVData = (uint8_t *)_frame->data[i];
    for (int j = 0; j < (mHeight >> nShift); j++) {
        memcpy(&pData->pOutBuffer[nDataLen], pYUVData, (mWidth >> nShift));
        pYUVData += _frame->linesize[i];
        nDataLen += (mWidth >> nShift);
    }
}

Then I prepare textures for the Y, U & V channels.
//: U Texture
if (sampler1Texture) glDeleteTextures(1, &sampler1Texture);
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &sampler1Texture);
glBindTexture(GL_TEXTURE_2D, sampler1Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_LUMINANCE,
texW / 2,
texH / 2,
0,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
NULL);
//: V Texture
if (sampler2Texture) glDeleteTextures(1, &sampler2Texture);
glActiveTexture(GL_TEXTURE2);
glGenTextures(1, &sampler2Texture);
glBindTexture(GL_TEXTURE_2D, sampler2Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_LUMINANCE,
texW / 2,
texH / 2,
0,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
NULL);
//: Y Texture
if (sampler0Texture) glDeleteTextures(1, &sampler0Texture);
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &sampler0Texture);
glBindTexture(GL_TEXTURE_2D, sampler0Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// This is necessary for non-power-of-two textures
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glEnable(GL_TEXTURE_2D);
glTexImage2D(GL_TEXTURE_2D,
0,
GL_LUMINANCE,
texW,
texH,
0,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
NULL);

The rendering part is below.
int _idxU = mFrameW * mFrameH;
int _idxV = _idxU + (_idxU / 4);
// U data
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, sampler1Texture);
glUniform1i(sampler1Uniform, 1);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW / 2, // source width
mFrameH / 2, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
&_frameData[_idxU]);
// V data
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, sampler2Texture);
glUniform1i(sampler2Texture, 2);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW / 2, // source width
mFrameH / 2, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
&_frameData[_idxV]);
// Y data
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sampler0Texture);
glUniform1i(sampler0Uniform, 0);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
mFrameW, // source width
mFrameH, // source height
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
_frameData);

The vertex shader & fragment shader are below.
// Vertex shader
attribute vec4 Position;
attribute vec2 TexCoordIn;
varying vec2 TexCoordOut;
varying vec2 TexCoordOut_UV;
uniform mat4 Projection;
uniform mat4 Modelview;

void main()
{
    gl_Position = Projection * Modelview * Position;
    TexCoordOut = TexCoordIn;
}

// Fragment shader
uniform sampler2D sampler0; // Y Texture Sampler
uniform sampler2D sampler1; // U Texture Sampler
uniform sampler2D sampler2; // V Texture Sampler
varying highp vec2 TexCoordOut;

void main()
{
    highp float y = texture2D(sampler0, TexCoordOut).r;
    highp float u = texture2D(sampler2, TexCoordOut).r - 0.5;
    highp float v = texture2D(sampler1, TexCoordOut).r - 0.5;
    //y = 0.0;
    //u = 0.0;
    //v = 0.0;
    highp float r = y + 1.13983 * v;
    highp float g = y - 0.39465 * u - 0.58060 * v;
    highp float b = y + 2.03211 * u;
    gl_FragColor = vec4(r, g, b, 1.0);
}

The Y texture (grayscale) is correct, but U & V contain a lot of green.
So the final RGB image (Y+U+V) has a lot of green.
What's the problem? Please help.
Thanks.
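A few things stand out in the posted code itself; this is only a guess from reading the snippet, not a confirmed fix. The V-texture upload passes the texture name to glUniform1i (glUniform1i(sampler2Texture, 2)) instead of the uniform location; the fragment shader reads u from sampler2 and v from sampler1, the opposite of the declared comments; and the 512x512 textures are only partially filled by the 480x320 frame, so texture coordinates spanning the full 0..1 range also sample the unwritten (zero) chroma area, which the YUV->RGB matrix renders as green. A minimal sketch of the corrected binding and sampling, assuming a sampler2Uniform location exists alongside sampler0Uniform and sampler1Uniform:

// Sketch only (untested): pass the uniform location, not the texture object name
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, sampler2Texture);
glUniform1i(sampler2Uniform, 2);   // was: glUniform1i(sampler2Texture, 2);

// Fragment shader: sample U from sampler1 (unit 1) and V from sampler2 (unit 2),
// matching the textures actually bound to those units
highp float y = texture2D(sampler0, TexCoordOut).r;
highp float u = texture2D(sampler1, TexCoordOut).r - 0.5;
highp float v = texture2D(sampler2, TexCoordOut).r - 0.5;

// If the 512x512 textures are kept, scale the texture coordinates so that only
// the written 480x320 (and 240x160 chroma) region is sampled, e.g. multiply
// TexCoordOut by vec2(frameW / texW, frameH / texH) passed in as a uniform.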
- Fragmented MP4 has wrong duration in Windows Media Player
3 February 2020, by Expressingx
I'm streaming video from a camera to a file using ffmpeg. The reason to use fragmented MP4 is that a sudden power-off could otherwise lose the video. I've configured the fragments to be at least 30 seconds long, and that is the duration Windows Media Player reports even if the recording is 2 hours long; after that point it keeps playing the video, but it is not seekable past the 30th second. How can that be fixed?

ffmpeg -c copy -movflags frag_keyframe+separate_moof+omit_tfhd_offset -min_frag_duration 30000000 -bsf:a aac_adtstoasc out.mp4

If I use -movflags frag_keyframe+empty_moov, then the video is not seekable at all and no duration is reported. I am aiming for a seekable video with the correct duration in both VLC and Windows Media Player.
ffmpeg version: 3.3.4
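One workaround worth noting (not part of the original question, and assuming the capture usually ends cleanly): a fragmented MP4 only advertises the duration known when the initial moov was written, so remuxing the finished file into a regular MP4 rebuilds the full index and duration, while the fragmented original still protects against power loss during capture. A minimal sketch, with out_seekable.mp4 as a hypothetical output name:

# Hypothetical post-processing step (stream copy, no re-encoding):
# remux the fragmented recording into a regular, fully indexed MP4
ffmpeg -i out.mp4 -c copy -movflags +faststart out_seekable.mp4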
- FFmpeg command works on the command line, but not in a Python script (semi-solved)
20 February 2015, by Fooldj
Okay, kind of a weird problem, but I'm not sure whether it's Python, ffmpeg, or some stupid thing I'm doing wrong.
I'm trying to take a video, grab 1 frame per second, and output each frame as an image. Right now, if I use ffmpeg on the command line:

ffmpeg -i test.avi -r 1 -f image2 image-%3d.jpeg -pix_fmt rgb24 -vcodec rawvideo

It outputs about 10 images and the images look fine, awesome. Now I have this code (right now some code from a GitHub project, as I wanted something I was relatively sure would work, and mine is all convoluted):
import subprocess as sp
import numpy as np
import re
import cv2
import time

FFMPEG_BIN = r'ffmpeg.exe'
INPUT_VID = 'test.avi'

def getInfo():
    command = [FFMPEG_BIN, '-i', INPUT_VID, '-']
    pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)
    pipe.stdout.readline()
    pipe.terminate()
    infos = pipe.stderr.read()
    infos_list = infos.split('\r\n')
    res = re.search(' \d+x\d+ ', infos)
    res = [int(x) for x in res.group(0).split('x')]
    return res

res = getInfo()

command = [FFMPEG_BIN,
           '-i', INPUT_VID,
           '-f', 'image2pipe',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo', '-']
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

n = 0
im2 = []
try:
    mog = cv2.BackgroundSubtractorMOG2(120, 2, True)
    while True:
        raw_image = pipe.stdout.read(res[0]*res[1]*3)
        # transform the bytes read into a numpy array
        image = np.fromstring(raw_image, dtype='uint8')
        image = image.reshape((res[1], res[0], 3))
        rgbImg = image.copy()
        fname = ('_tmp%03d.png' % time.time())
        cv2.imwrite(fname, rgbImg)
        # throw away the data in the pipe's buffer.
        #pipe.stdout.flush()
        n += 1
        print n
except:
    print 'done', n
    pipe.kill()
    cv2.destroyAllWindows()

When I run this, I get 10 images, but they all have a blue tint! I cannot for the life of me figure out why. I've done tons of searches and tried quite a few different codecs (that usually just makes things worse). The media info for the video file is here:
General
Complete name : test.avi
Format : AVI
Format/Info : Audio Video Interleave
File size : 85.0 KiB
Duration : 133ms
Overall bit rate : 5 235 Kbps
Video
ID : 0
Format : JPEG
Codec ID : MJPG
Duration : 133ms
Bit rate : 1 240 Kbps
Width : 640 pixels
Height : 480 pixels
Display aspect ratio : 4:3
Frame rate : 30.000 fps
Color space : YUV
Chroma subsampling : 4:2:2
Bit depth : 8 bits
Compression mode : Lossy
Bits/(Pixel*Frame) : 0.135
Stream size : 20.1 KiB (24%)

Any suggestions? It seems like it should be an RGB mix-up... just not sure where.
EDIT: So I fixed the problem by switching the blue and red channels with this code:
bChannel = rgbImg[:, :, 0]
rChannel = rgbImg[:, :, 2]
gChannel = rgbImg[:, :, 1]

rgbArray = np.zeros((res[1], res[0], 3), 'uint8')
rgbArray[..., 0] = rChannel
rgbArray[..., 1] = gChannel
rgbArray[..., 2] = bChannel

So I guess this is now a question of: why is Python mixing up these channels? Is it a problem with Python, ffmpeg, or the codec?
Thanks!
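For what it's worth, a likely explanation, inferred only from the snippet above and not confirmed in the original post: the ffmpeg pipe delivers bytes in rgb24 order, while OpenCV's imwrite (and imshow) interprets arrays as BGR, so red and blue end up swapped in the saved files. A smaller sketch of the same fix, either converting before writing or asking ffmpeg for BGR directly:

# Option 1: convert the RGB frame to OpenCV's BGR order before saving
bgrImg = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
cv2.imwrite(fname, bgrImg)

# Option 2: request BGR bytes from ffmpeg in the first place, i.e. use
# '-pix_fmt', 'bgr24' in the command list, so the array read from the pipe
# already matches what cv2.imwrite expects.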