persist

Doctrine2 - Get entity ID before flush

假如想象 submitted on 2019-12-04 01:10:02
Question: Is there any way to get an entity's ID before the persist/flush? I mean:

    $entity = new PointData();
    $form = $this->createForm(new PointDataType(), $entity);

If I try $entity->getId() at this point, it returns nothing. I can get it working with:

    $em->persist($entity);
    $em->flush();

(assuming $em = $this->getDoctrine()->getEntityManager();). How can I achieve this?

Answer: If you want to know the ID of an entity before it has been persisted to the database, then you obviously can't use generated identifiers. You'll need to find some way to generate unique identifiers yourself (perhaps some kind of hash
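The usual way to have an ID available before any persist/flush is a client-generated identifier such as a UUID, assigned in the entity's constructor. A minimal stdlib-only Java sketch of the idea (the `PointData` class here is illustrative, mirroring the question's entity name, with no ORM involved):

```java
import java.util.UUID;

// Illustrative entity: the ID is assigned in the constructor,
// so it is readable before the object ever reaches the database.
class PointData {
    private final String id;

    PointData() {
        this.id = UUID.randomUUID().toString();
    }

    String getId() {
        return id;
    }
}

public class Main {
    public static void main(String[] args) {
        PointData entity = new PointData();
        // The ID is already available, no persist/flush needed.
        System.out.println(entity.getId() != null); // prints "true"
    }
}
```

The same pattern works in Doctrine by assigning the UUID in the PHP constructor and mapping the ID column as non-generated.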


Where is my sparkDF.persist(DISK_ONLY) data stored?

此生再无相见时 submitted on 2019-12-03 13:36:35
Question: I want to understand more about the persisting strategy of Hadoop out of Spark. When I persist a dataframe with the DISK_ONLY strategy, where is my data stored (path/folder...)? And where do I specify this location?

Answer 1: For the short answer we can just have a look at the documentation regarding spark.local.dir:

    Directory to use for "scratch" space in Spark, including map output files
    and RDDs that get stored on disk. This should be on a fast, local disk in
    your system. It can also be a comma
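Per Spark's configuration documentation, spark.local.dir accepts a comma-separated list of directories on different disks. As a concrete illustration, it can be set cluster-wide in spark-defaults.conf (the path below is an example, not a recommendation):

```
# spark-defaults.conf -- example path; point this at a fast local disk
spark.local.dir /mnt/fast-ssd/spark-scratch
```

Note that on a cluster manager such as YARN or in standalone mode, this setting is typically overridden by the manager's own scratch-directory configuration (e.g. the node manager's local dirs), so it mainly applies in local mode or when the cluster manager does not impose its own value.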


Stop Hibernate from updating collections when they have not changed

拜拜、爱过 submitted on 2019-12-03 06:16:26
I have two entity beans defined as follows (unrelated stuff removed):

    @Entity
    @Table(...)
    public class MasterItem implements java.io.Serializable {

        private Set<CriticalItems> criticalItemses = new HashSet<CriticalItems>(0);

        @OneToMany(fetch = FetchType.EAGER, mappedBy = "masterItem", orphanRemoval = true,
                   cascade = {javax.persistence.CascadeType.DETACH})
        @Cascade({CascadeType.SAVE_UPDATE, CascadeType.DELETE})
        public Set<CriticalItems> getCriticalItemses() {
            return this.criticalItemses;
        }
    }

CriticalItems is defined as follows:

    @Entity
    @Table(...)
    public class CriticalItems implements java.io
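A common cause of spurious collection updates is replacing the collection instance that Hibernate handed back (it wraps the Set in its own PersistentSet, and swapping that reference marks the whole collection dirty). The usual advice is to mutate the existing collection in place. A stdlib-only Java sketch of that principle, with no Hibernate involved (field and class names follow the question):

```java
import java.util.HashSet;
import java.util.Set;

// Dirty-check-friendly pattern: the setter mutates the original Set
// in place instead of replacing it, so an ORM's collection wrapper
// (and its dirty tracking) would survive the call.
class MasterItem {
    private Set<String> criticalItems = new HashSet<>();

    Set<String> getCriticalItems() {
        return criticalItems;
    }

    void setCriticalItems(Set<String> items) {
        this.criticalItems.clear();
        if (items != null) {
            this.criticalItems.addAll(items);
        }
    }
}

public class Main {
    public static void main(String[] args) {
        MasterItem item = new MasterItem();
        Set<String> before = item.getCriticalItems();
        item.setCriticalItems(Set.of("a", "b"));
        // The original instance survives the setter call.
        System.out.println(before == item.getCriticalItems()); // prints "true"
    }
}
```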

Spark: Difference between Shuffle Write, Shuffle spill (memory), Shuffle spill (disk)?

时光总嘲笑我的痴心妄想 submitted on 2019-12-03 04:51:46
I have the following Spark job, trying to keep everything in memory:

    val myOutRDD = myInRDD.flatMap { fp =>
      val tuple2List: ListBuffer[(String, myClass)] = ListBuffer()
      ...
      tuple2List
    }.persist(StorageLevel.MEMORY_ONLY).reduceByKey { (p1, p2) =>
      myMergeFunction(p1, p2)
    }.persist(StorageLevel.MEMORY_ONLY)

However, when I looked into the job tracker, I still have a lot of shuffle write and shuffle spill to disk:

    Total task time across all tasks: 49.1 h
    Input Size / Records:             21.6 GB / 102123058
    Shuffle write:                    532.9 GB / 182440290
    Shuffle spill (memory):           370.7 GB
    Shuffle spill (disk):             15.4 GB
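Part of the answer is that persist() cannot remove the shuffle: reduceByKey always repartitions by key, and the persist calls only cache the results. What bounds shuffle write is reduceByKey's map-side combine, which pre-aggregates each partition before anything is written. A stdlib-only Java model of that pre-aggregation step (the data is made up for illustration):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Model of reduceByKey's map-side combine: records are pre-aggregated
// per partition, so at most one record per distinct key is written to
// the shuffle, regardless of how many input records the partition had.
public class Main {
    static Map<String, Integer> combine(List<Map.Entry<String, Integer>> partition) {
        Map<String, Integer> combined = new HashMap<>();
        for (Map.Entry<String, Integer> record : partition) {
            combined.merge(record.getKey(), record.getValue(), Integer::sum);
        }
        return combined;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> partition = List.of(
                Map.entry("a", 1), Map.entry("b", 2),
                Map.entry("a", 3), Map.entry("a", 4));
        Map<String, Integer> shuffled = combine(partition);
        // 4 input records collapse to 2 shuffle records.
        System.out.println(shuffled.size());   // prints "2"
        System.out.println(shuffled.get("a")); // prints "8"
    }
}
```

When shuffle write is still huge after combining, it usually means there are many distinct keys, so the combine has little to merge.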


PreUpdate entity symfony LifecycleCallbacks

丶灬走出姿态 submitted on 2019-12-02 19:11:51
Question: I have a little problem with the PreUpdate lifecycle callbacks in Symfony. I have an entity User with a OneToMany relation to an entity Product.

    class User {
        /**
         * @ORM\OneToMany(targetEntity="Product", mappedBy="formulario", cascade={"persist", "remove"})
         */
        private $products;
    }

    class Product {
        /**
         * @ORM\ManyToOne(targetEntity="User", inversedBy="products")
         * @ORM\JoinColumn(name="user", referencedColumnName="id")
         */
        private $user;
    }

My problem is when I add or remove a product from the User
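For a bidirectional OneToMany with cascade persist/remove, the usual advice is to keep both sides of the relation in sync with add/remove helper methods, since lifecycle callbacks and cascades operate on whichever side actually changed. A plain-Java sketch of that pattern, with no ORM involved (class names follow the question):

```java
import java.util.ArrayList;
import java.util.List;

// Bidirectional sync helpers: adding or removing a Product updates
// both the User's collection and the Product's back-reference, which
// is what cascade={"persist", "remove"} relies on.
class User {
    private final List<Product> products = new ArrayList<>();

    void addProduct(Product p) {
        products.add(p);
        p.setUser(this);
    }

    void removeProduct(Product p) {
        products.remove(p);
        p.setUser(null);
    }

    List<Product> getProducts() { return products; }
}

class Product {
    private User user;
    void setUser(User u) { this.user = u; }
    User getUser() { return user; }
}

public class Main {
    public static void main(String[] args) {
        User user = new User();
        Product product = new Product();
        user.addProduct(product);
        System.out.println(product.getUser() == user);    // prints "true"
        user.removeProduct(product);
        System.out.println(user.getProducts().isEmpty()); // prints "true"
    }
}
```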

JPA merge vs. persist

醉酒当歌 submitted on 2019-12-02 15:03:05
So far, my preference has been to always use the EntityManager's merge() to take care of both inserts and updates. But I have also noticed that merge performs an additional select query before the update/insert to ensure the record does not already exist in the database. Now I am working on a project requiring extensive (bulk) inserts to the database. From a performance point of view, does it make sense to use persist instead of merge in a scenario where I absolutely know that I am always creating a new instance of the objects to be persisted?

Óscar López: It's not a good idea using merge when a persist
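For bulk inserts of brand-new objects, persist() avoids merge()'s extra per-entity SELECT, and the standard idiom is to flush and clear the persistence context every N entities so its first-level cache never grows unbounded. A stdlib-only model of that batching loop (the toy context stands in for the EntityManager; the batch size is an example):

```java
import java.util.ArrayList;
import java.util.List;

// Model of the bulk-insert idiom with persist(): flush and clear the
// persistence context every BATCH_SIZE entities. In real JPA code the
// two calls below would be em.flush() and em.clear().
public class Main {
    static final int BATCH_SIZE = 50;

    // Stand-in for the persistence context's first-level cache.
    static final List<Object> context = new ArrayList<>();
    static int flushes = 0;

    static void persist(Object entity) {
        context.add(entity);
    }

    static void flushAndClear() {
        flushes++;       // real code: em.flush();
        context.clear(); // real code: em.clear();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 200; i++) {
            persist(new Object());
            if ((i + 1) % BATCH_SIZE == 0) {
                flushAndClear();
            }
        }
        System.out.println(flushes);        // prints "4"
        System.out.println(context.size()); // prints "0"
    }
}
```

With Hibernate, pairing this loop with a matching hibernate.jdbc.batch_size setting lets the flushed inserts go out as JDBC batches.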

Symfony add data to object on pre persist

假如想象 submitted on 2019-12-02 10:53:04
Question: I have a form to create documents. On one side I can add names and descriptions, and next to that I can select one or several agencies to whom the created document belongs. Each of the agencies is assigned to one specific market (there are 7 markets in total, so one market can have several agencies, but one agency belongs to only one market!). What I want to achieve is a "prePersist" function which automatically adds the correct market(s) (depending on the number of agencies selected) to the
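The core of what such a prePersist hook has to compute is the distinct set of markets behind the selected agencies, since each agency belongs to exactly one market. A stdlib-only Java sketch of that derivation (the names and data are illustrative, not from the question):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Derive the distinct markets for a document from its selected
// agencies; a Set deduplicates agencies that share a market.
public class Main {
    record Agency(String name, String market) {}

    static Set<String> marketsFor(List<Agency> selected) {
        Set<String> markets = new LinkedHashSet<>();
        for (Agency agency : selected) {
            markets.add(agency.market());
        }
        return markets;
    }

    public static void main(String[] args) {
        List<Agency> selected = List.of(
                new Agency("A1", "DE"), new Agency("A2", "DE"),
                new Agency("A3", "FR"));
        // Three agencies across two markets yield two markets.
        System.out.println(marketsFor(selected)); // prints "[DE, FR]"
    }
}
```

In the Symfony version, the same loop would run in a prePersist lifecycle callback (or a Doctrine event listener) and attach the resulting market entities to the document before it is flushed.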