Please read the general rules for Trained Models if you are not sure where to post requests or what you are looking for.

I increased the page file to 60 GB and training started. (Yes, but on a different partition.) For this basic deepfake we'll use the Quick96 model, since it has better support for low-end GPUs and is generally more beginner friendly; SAEHD is the heavyweight model for high-end cards, aimed at the maximum possible deepfake quality. The workflow begins with 1) clear workspace and 2) extract images from video data_src.

I actually got a pretty good result after about five attempts, all in the same training session. I used to run XSeg on a GeForce 1060 6GB and it would run fine at batch 8. With XSeg you draw masks on your aligned faces; after you apply the trained XSeg mask, you train with SAEHD. I just continue training for brief periods, applying a new mask, then checking and fixing any masked faces that need a little help. Training XSeg is a tiny part of the entire process.

Python version: the one that came with a fresh DFL download yesterday.

How to share SAEHD models: 1. Describe the model using the SAEHD model template from the rules thread. A lot of the time people label and train XSeg masks but forget to apply them, and that's why the results look wrong. (Some NSFW XSeg-masked facesets were uploaded by someone before the links were removed by the mods.)

This is the DeepFaceLab 2.0 XSeg Models and Datasets Sharing Thread. This video was made to show the current workflow to follow when you want to create a deepfake with DeepFaceLab (source: the idonov/DeepFaceLab repository on DagsHub). SAEHD looked good after about 100-150k iterations (batch 16), with GAN used afterwards to touch things up a bit. During training, XSeg looks at the images and the masks you've created and warps them to determine the pixel differences in the image. v4 (1,241,416 iterations).
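Step "2) extract images from video data_src" boils down to an ffmpeg frame dump. As a rough sketch of what that step does (DFL drives ffmpeg internally and its exact flags may differ; the workspace paths below are the standard names, and `build_extract_cmd` is a helper invented for this illustration):

```python
# Sketch of the frame-extraction step using a plain ffmpeg call.
# DeepFaceLab runs ffmpeg itself; the exact flags it passes may differ.

def build_extract_cmd(video_path, out_dir, fps=None):
    """Build an ffmpeg argument list that dumps frames as JPEGs."""
    cmd = ["ffmpeg", "-i", video_path]
    if fps is not None:
        cmd += ["-vf", f"fps={fps}"]  # optionally thin the frames
    # -q:v 2 asks for high-quality JPEGs; %05d gives zero-padded names.
    cmd += ["-q:v", "2", f"{out_dir}/%05d.jpg"]
    return cmd

cmd = build_extract_cmd("workspace/data_src.mp4", "workspace/data_src")
print(" ".join(cmd))
```

Running the printed command requires ffmpeg on the PATH; pass `fps=` only if you want fewer frames than the source has.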
XSeg training makes the network robust to hands, glasses, and any other objects which may cover the face. Grab 10-20 alignments from each dst/src set you have, ensure they vary, and try not to go higher than ~150 labeled faces at first.

Read the FAQs and search the forum before posting a new topic.

Merger mask modes: learned-dst uses masks learned during training; learned-prd*dst combines both masks, taking the smaller of the two.

From the project directory, double-click the file labeled '6) train Quick96.bat'. (Tensorflow-gpu 2.x.) When sharing, describe the SAEHD model using the SAEHD model template from the rules thread. A value of 2 is too much here; start at a lower value, use the value DFL recommends (type help), and only increase it if needed.

Phase II: Training. It has been claimed that faces are recognized as a "whole" rather than by recognition of individual parts.

Faceset listing: Nimrat Khaira - Face: WF / Res: 512 / XSeg: None / Qty: 18,297.

Enable random warp of samples: random warp is required to generalize the facial expressions of both faces. Step 5 is merging. In this DeepFaceLab XSeg tutorial I show you how to make better deepfakes and take your composition to the next level! I'll go over what XSeg is and some important terminology. I'm not sure if you can turn off random warping for XSeg training, and frankly I don't think you should; it helps the mask training generalize to new data sets.
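The learned-prd*dst / learned-prd+dst naming maps onto simple per-pixel combinations: multiplying two [0, 1] masks can only shrink coverage, while taking the maximum can only grow it. A toy illustration (not DFL's actual merger code; the mask values are made up):

```python
# Two per-pixel masks in [0, 1]: one learned from the predicted/src side,
# one learned from the dst side. Values here are invented for illustration.
prd = [0.0, 0.4, 0.9, 1.0]
dst = [0.0, 0.8, 0.5, 1.0]

# "learned-prd*dst": the product is never larger than either mask,
# which matches the thread's "smaller size of both".
prd_times_dst = [a * b for a, b in zip(prd, dst)]

# "learned-prd+dst": the maximum covers the union of both masks,
# matching "bigger size of both".
prd_plus_dst = [max(a, b) for a, b in zip(prd, dst)]

print(prd_times_dst)
print(prd_plus_dst)
```

The product mode is the safer choice when one mask over-covers (e.g. includes background), because any region either mask excludes stays excluded.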
'5.XSeg) data_dst/data_src mask for XSeg trainer - remove.bat' removes applied masks. Pickle is a good way to go for saving labels and settings between sessions. To conclude, and to answer your question: a smaller mini-batch size (not too small) usually leads not only to a smaller number of iterations of a training algorithm than a large batch size, but also to higher accuracy overall.

It is now time to begin training our deepfake model. Very soon in the Colab XSeg training process, the faces of my previously SAEHD-trained model (140k iterations) already look perfectly masked. As I don't know what the pictures are, I cannot be sure. After the XSeg trainer has loaded its samples, it continues on to the filtering stage and then begins training. If your model has collapsed, you can only revert to a backup. Manually labeling/fixing frames and training the face model takes the bulk of the time. This step is a lot of work: you have to draw a mask for every key expression and movement as training data, usually somewhere between a few dozen and a few hundred images.

Quick96 is what you want if you're just doing a quick and dirty job for a proof of concept, or if it's not important that the quality is top notch. In my own tests, I only have to mask 20-50 unique frames and the XSeg training will do the rest of the job for you. "So we develop a high-efficiency face segmentation tool, XSeg, which allows everyone to customize it to suit specific requirements by few-shot learning." If you include that bit of cheek, it might train as the inside of her mouth, or it might stay about the same.

Differences from SAE: + the new encoder produces a more stable face and less scale jitter. Download this and put it into the model folder.
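The thread's pickle suggestion, written out as a complete, working snippet (pickle files must be opened in binary mode, `"wb"`/`"rb"`, not `"w"`; the `history` payload is an invented example):

```python
import pickle as pkl

# Example payload: made-up training metadata you might want to keep around.
history = {"iterations": 170_000, "src_loss": 0.023}

# To save it (binary mode is required for pickle files):
with open("train.pkl", "wb") as f:
    pkl.dump(history, f)

# To load it back later:
with open("train.pkl", "rb") as f:
    restored = pkl.load(f)

print(restored)
```

Anything picklable (dicts, lists, numpy arrays) can be stored the same way; just never unpickle files from untrusted sources.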
Without manually editing the masks of a bunch of pics, but just adding downloaded masked pics to the dst aligned folder for XSeg training, I'm wondering how DFL learns the mask. It will likely collapse again, however; that usually depends quite a bit on your model settings. GPU: GeForce 3080 10GB.

During training, check the previews often. If some faces have bad masks after about 50k iterations (bad shape, holes, blurry), save and stop training, apply the masks to your dataset, run the editor, find the faces with bad masks by enabling the XSeg mask overlay in the editor, label them, hit Esc to save and exit, and then resume XSeg model training.

In addition to posting in this thread or the general forum: I ran 4 cases, both for SAEHD and XSeg, and with enough and not enough pagefile. The DFL and FaceSwap developers have not been idle, for sure: it's now possible to use larger input images for training deepfake models, though this requires more expensive video cards, and masking out occlusions (such as hands in front of faces) in deepfakes has been semi-automated by innovations such as XSeg training.

This happened on both XSeg and SAEHD training: during the initializing phase, after loading in the samples, the program errors out and stops; memory usage starts climbing while loading the XSeg-mask-applied facesets. Then I'll apply the mask, edit material to fix up any learning issues, and continue training without the XSeg facepak from then on. When sharing, describe the AMP model using the AMP model template from the rules thread. + the new decoder produces a subpixel-clear result.
I've been trying to use XSeg for the first time today, and everything looks "good", but after a little training, when I go back to the editor to patch/remask some pictures, I can't see the mask overlay.

5) Train XSeg. The guide literally has an explanation of when, why, and how to use every option; read it again. Maybe you missed the training part of the guide that contains a detailed explanation of each option. 3) Gather a rich src headset from only one scene (same color and haircut). 4) Mask the whole head for src and dst using the XSeg editor.

With a batch size of 512, the training is nearly 4x faster compared to batch size 64. Moreover, even though the batch size 512 run took fewer steps, in the end it has better training loss and slightly worse validation loss.

Easy deepfake tutorial for beginners: XSeg. (Issue #5732.) I have now moved DFL to the boot partition; the behavior remains the same. This forum is for discussing tips and understanding the process involved in training a Faceswap model. At last, after a lot of training, you can merge. After training starts, memory usage returns to normal (24/32 GB).

XSeg model training, DFL 2.0.
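The "fewer steps at batch 512" observation is just arithmetic: one pass over the faceset needs ceil(N / batch_size) iterations, so an 8x larger batch cuts the step count by roughly 8x. A quick check (the faceset size below is hypothetical):

```python
import math

def steps_per_epoch(num_samples, batch_size):
    """Iterations needed to see every sample once at a given batch size."""
    return math.ceil(num_samples / batch_size)

n = 50_000  # hypothetical faceset size
for bs in (64, 512):
    print(f"batch {bs}: {steps_per_epoch(n, bs)} steps per epoch")
```

Fewer steps is not the same as less compute: each big-batch step does proportionally more work, which is why the wall-clock speedup depends on how well the GPU is utilized at each size.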
I have to lower the batch_size to 2 to have it even start. A skill in programs such as After Effects or DaVinci Resolve is also desirable.

Everything is working about ten times slower: face extraction took 70 minutes for 1,000 faces, and XSeg training freezes after 200 iterations. Model training dies if it hits OOM.

The workspace is the container for all video, image, and model files used in the deepfake project. Windows 10 v1909, build 18363. The XSeg model needs to be edited more, or given more labels, if I want a perfect mask. The XSeg training on src ended up being at worst 5 pixels over.

6) Apply the trained XSeg mask for the src and dst headsets. Today I trained again without changing any setting, but the loss rate for src rose from 0.05. Notes, tests, experience, tools, study, and explanations of the source code. Solution below: use Tensorflow 2.x.

Does the model differ if an XSeg-trained mask is applied while training? Training requires labeled material: you have to use DeepFaceLab's built-in tool to draw masks onto the images by hand. Extract source video frame images to workspace/data_src. XSeg training is a completely different mode from regular training or pretraining. Video made in DFL 2.0 using XSeg mask training (~213k it) and SAEHD training (only ~80k it).
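Several posts here end up lowering batch_size by hand until training starts at all. DFL asks for the batch size interactively, but the manual search people describe amounts to this sketch (`fake_step` is a stand-in for a real training step, and the "only batch <= 2 fits" limit is invented to mimic the report above):

```python
def find_workable_batch(train_step, start=16, floor=1):
    """Halve the batch size until one training step survives without OOM.
    `train_step` is any callable that raises MemoryError when the batch
    is too big for VRAM."""
    bs = start
    while bs >= floor:
        try:
            train_step(bs)
            return bs
        except MemoryError:
            bs //= 2
    raise RuntimeError("even the smallest batch does not fit")

# Toy stand-in: pretend only batches of 2 or fewer fit in VRAM.
def fake_step(bs):
    if bs > 2:
        raise MemoryError

result = find_workable_batch(fake_step, start=16)
print(result)  # 2
```

In practice you do the same thing by restarting the trainer with a smaller value; resolution and optimizer_mode shift where that ceiling sits.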
Include a link to the model (avoid zips/rars) on a free file sharing service of your choice.

MVE workflow: Step 9 - creating and editing XSeg masks. Step 10 - setting the model folder (and inserting a pretrained XSeg model). Step 11 - embedding XSeg masks into faces. Step 12 - setting the model folder in MVE. Step 13 - training XSeg from MVE. Step 14 - applying trained XSeg masks. Step 15 - importing trained XSeg masks to view in MVE.

My joy is that after about 10k iterations my XSeg training was pretty much done (I ran it for 2k more just to catch anything I might have missed). Leave both random warp and flip on the entire time while training. Set face_style_power to 0; we'll increase it later. You want styles on only at the start of training (about 10-20k iterations, then set both back to 0): usually face style 10 to morph src toward dst, and/or background style 10 to fit the background and the dst face border better to the src face.

Step 1: frame extraction. XSeg allows everyone to train their own model for the segmentation of a specific face. Manually mask these with the XSeg editor, then run XSeg) train.bat. DFL installation functions. For an 8 GB card you can place…

XSeg dst covers the beard, but cuts the head and hair up. It should be able to use the GPU for training. Also, it just stopped after five hours. With the XSeg model you can train your own mask segmentator for dst (and src) faces that will be used in the merger for whole_face.
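The style advice above ("about 10-20k iterations with face style 10, then back to 0") is a schedule you apply by hand in DFL's interactive options; expressed as a function it looks like this (the 20k cutoff is just one point in the suggested range, and the function itself is an illustration, not a DFL setting):

```python
def style_power(iteration, warmup_iters=20_000, power=10.0):
    """Face/background style power for a given iteration, following the
    'styles on only at the start of training' advice from the thread."""
    return power if iteration < warmup_iters else 0.0

print(style_power(5_000))    # early training: styles on
print(style_power(50_000))   # later: styles off
```

DFL does not schedule this automatically; you pause training and change the option when you cross the cutoff.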
Frame extraction functions.

5.3: XSeg mask labeling & XSeg model training. Q1: XSeg is not mandatory, because the faces have a default mask. HEAD masks are not ideal, since they cover hair, neck, and ears (depending on how you mask it, but in most cases with short-haired male faces you include hair and ears), which aren't fully covered by WF and not at all by FF.

This forum is for reporting errors with the extraction process. In the XSeg viewer there is a mask on all faces. I don't even know if this will apply without training masks. Model first run.

An RTX 3090 fails when training SAEHD or XSeg if the CPU does not support AVX2 ("Illegal instruction, core dumped"). You can then see the trained XSeg mask for each frame, and add manual masks where needed. I have an issue with XSeg training. Actual behavior: the XSeg trainer looks like this (this is from the default Elon Musk video, by the way). Steps to reproduce: I deleted the labels, then labeled again. I guess you'd need enough source material without glasses for them to disappear.

Example settings: resolution: 128 (increasing resolution requires a significant VRAM increase); face_type: f; learn_mask: y; optimizer_mode: 2 or 3 (modes 2/3 place work on the GPU and in system memory). Normally at gaming loads the Ryzen 5800H reaches temps of 85-90, and AMD has confirmed it is made to run that way.

Video chapters: 00:00 Start; 00:21 What is pretraining?; 00:50 Why use it. Download RTT V2 224. Same problem here when I try an XSeg train with my RTX 2080 Ti (using the RTX 2080 Ti build released on 01-04-2021; same issue with end-December builds; it works only with the 12-12-2020 build). GPU: GeForce 3080 10GB. The images in question are the bottom right and the image two above that. When the rightmost preview column becomes sharper, stop training and run a convert.
I've already made the face path in the XSeg editor and trained it, but now when I try to execute file 5… Mark your own mask for only 30-50 faces of the dst video. The software will load all our image files and attempt to run the first iteration of training. I have an issue with XSeg training.

Do not mix different ages. Keep the shape of the source faces. I'll try.

And this trend continues for a few hours, until it gets so slow that there is only one iteration about every 20 seconds. First apply XSeg to the model. I have 32 GB of RAM and had a 40 GB page file, and still got these page file errors when starting SAEHD training.

Include a link to the model (avoid zips/rars) on a free file sharing service of your choice (Google Drive, Mega). If I lower the resolution of the aligned src, the training iterations go faster, but it will still take extra time on every 4th iteration. I updated CUDA, cuDNN, and the drivers.

Do not post RTM, RTT, AMP or XSeg models here; they all have their own dedicated threads: RTT models sharing, RTM models sharing, AMP models sharing, XSeg models and datasets sharing. It works perfectly fine when I start training with XSeg, but after a few minutes it stops for a few seconds and then continues, only slower.
The remove .bat removes labeled XSeg polygons from the extracted frames.

Post processing. DFL 2.0 XSeg tutorial. Steps to reproduce: I tried a clean install of Windows and followed all the tips. Then copy-paste those to your XSeg folder for future training. I don't see any problems with my masks in the XSeg trainer; I'm using masked training, and most other settings are default. Use the 5…

In the XSeg model the exclusions are indeed learned and fine; the issue now is that the training preview doesn't show them. It is now time to begin training our deepfake model. The best result is obtained when the faces were filmed over a short period of time and the subject does not change makeup or facial structure. Extra trained by Rumateus.

At 320 resolution it takes up to 13-19 seconds per iteration. ("xseg train not working", issue #5389.) This seems to even out the colors, but there is not much more info I can give you on the training. And then bake them in.

My loss is at 0.023 at 170k iterations, but when I go to the editor and look at the mask, none of those faces have a hole where I placed an exclusion polygon. After the draw is completed, use 5… When loading XSeg on a GeForce 3080 10GB, it uses ALL the VRAM.
But doing so means redoing the extraction, whereas the XSeg masks you can just save with XSeg fetch; then redo the XSeg training, apply, check, and launch the SAEHD training.

The only available options are the three colors and the two black-and-white displays. ("XSeg training GPU unavailable", issue #5214.) Get any video, extract frames as jpg, and extract faces as whole face. Don't change any names or folders, keep everything in one place, make sure you don't have any long paths or weird symbols in the path names, and try it again. Search for celebs by name and filter the results to find the ideal faceset! All facesets are released by members of the DFL community and are "Safe for Work".

XSeg apply/remove functions. DeepFaceLab code and required packages. '1) clear workspace' deletes all data in the workspace folder and rebuilds the folder structure. Final model config: ===== Model Summary =====.

DeepFaceLab is the leading software for creating deepfakes. Run the .bat script, open the drawing tool, and draw the mask of the DST. Applying the trained XSeg model to the aligned/ folder. Tensorflow-gpu.
Post in this thread, or create a new thread in this section (Trained Models). Faceset listing: Sydney Sweeney, HD, 18k images, 512x512.

Console logs. Again, we will use the default settings. How to pretrain deepfake models for DeepFaceLab. XSeg editor and overlays. Pack the faceset into a ".pak" archive file for faster loading times. (Video chapters: 47:40 beginning training of our SAEHD model; 51:00 color transfer.) Unfortunately, there is no "make everything OK" button in DeepFaceLab.

As you can see, the output shows the ERROR that resulted from a doubled 'XSeg_' in the path of XSeg_256_opt. When sharing, describe the XSeg model using the XSeg model template from the rules thread. Instead of using a pretrained model…

It really is an excellent piece of software. Notes, tests, experience, tools, study, and explanations of the source code. XSegged with Groggy4's XSeg model. Everything is fast. I do recommend checking… 5.XSeg) data_dst mask for XSeg trainer - edit.

Doing a rough project, I've run generic XSeg, and going through the frames in edit mode on the destination, several frames have picked up the background as part of the face. This may be a silly question, but if I manually add the mask boundary in edit view, do I have to do anything else to apply the new mask area, or will that not work?

+ pixel loss and dssim loss are merged together to achieve both training speed and pixel trueness. It's doing this to figure out where the boundaries of the sample masks are on the original image, and which collections of pixels are being included and excluded within those boundaries. But I have weak training. Download celebrity facesets for DeepFaceLab deepfakes. Put those GAN files away; you will need them later. Run: 5… You can use a pretrained model for head. Train until you have some good results on all the faces.
However, since some state-of-the-art face segmentation models fail to generate fine-grained masks in some particular shots, XSeg was introduced in DFL. (Issue #5726.)

Otherwise, you can always train XSeg in Colab, then download the models, apply them to your data src and dst, edit them locally, and re-upload to Colab for SAEHD training. Video created in DeepFaceLab 2.0 using XSeg mask training (213.192 it). Change: 5…

Just let XSeg run a little longer. When the face is clear enough, you don't need to do manual masking; you can apply the generic XSeg model and move on. XSeg apply takes the trained XSeg masks and exports them to the data set. learned-prd+dst: combines both masks, taking the bigger of the two.

On conversion, the settings listed in that post work best for me, but it always helps to fiddle around. Even pixel loss can cause it if you turn it on too soon; I only use those late in training. The further the training progresses, the more holes open up in the SRC model (who has short hair) where the hair disappears.

XSeg) data_src trained mask - apply. Plus, you have to apply the mask after XSeg labeling & training, then go for SAEHD training.
cpu_count = multiprocessing.cpu_count() // 2

Include a link to the model (avoid zips/rars) on a free file sharing service of your choice (Google Drive, Mega), in addition to posting in this thread. I was less zealous when it came to dst, because it was longer and I didn't really understand the flow, and I missed some parts in the guide.

Run the 5.XSeg .bat and a window pops up for drawing the dst masks: box-and-cut-out work, a detail job, and quite tiring. Then run the train .bat. It must work if it does for others; you must be doing something wrong. I've posted the result in a video.

XSeg-dst: uses the trained XSeg model to mask using data from the destination faces. Requesting that any facial XSeg data/models be shared here. I only deleted frames with obstructions or bad XSeg. Problems relative to the installation of DeepFaceLab.

MikeChan said: Dear all, I'm using DFL-Colab 2.0. 2) Use the "extract head" script. If it is successful, the training preview window will open. Faceset listing: Megan Fox - Face: F / Res: 512 / XSeg: Generic / Qty: 3,726.
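The stray `cpu_count = multiprocessing.cpu_count() // 2` line above is a worker-count default: use half the logical cores so extraction and mask-apply workers leave headroom for the rest of the system. Written out as a self-contained snippet:

```python
import multiprocessing

# Half the logical cores, but never fewer than one worker, so the value
# stays sane on single-core machines too.
cpu_count = max(1, multiprocessing.cpu_count() // 2)
print(cpu_count)
```

The `// 2` is a heuristic, not a requirement; on a machine dedicated to the job you can pass the full core count to the extraction scripts instead.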
As I understand it, if you had a super-trained model (they say 400-500 thousand iterations) covering all face positions, then you wouldn't have to start training from scratch every time.

XSeg-prd: uses the trained XSeg model to mask using data from the predicted faces. Remember that your source videos will have the biggest effect on the outcome! Out of curiosity, since you're using XSeg: did you watch the XSeg training, and when you see spots like those shiny spots begin to form, stop training, find several frames like the one with spots, mask them, rerun XSeg, and watch to see if the problem goes away? If it doesn't, mask more frames where the shiniest faces are. If you want to get tips, or to better understand the Extract process, read on.

7) Train SAEHD using the 'head' face_type as a regular deepfake model with the DF archi. That just looks like random warp. After that, just use the command. To dump training metadata: with open("train.pkl", "wb") as f: pkl.dump(obj, f) (note the binary mode).

Does model training take into account an applied trained XSeg mask? Does XSeg training affect the regular model training? If you want to see how XSeg is doing, stop training, apply the mask, then open XSeg Edit.